NLP 280: Seminar in Natural Language Processing

About


NLP 280 is a seminar course that features talks from industry experts in the natural language processing (NLP) and artificial intelligence (AI) areas.


The speaker schedule may change without notice, due to changes in speaker availability.


Titles, abstracts, and speaker bios will be made available as the talk date approaches.


Some seminar slots do not have a speaker. Instead, the seminar time will be used for discussion.


Unless noted otherwise, the seminar meets weekly on Friday at 2:40 PM.

Seminar Schedule 


Date: 1/7/2022                                      Time: 2:40 PM PST


Speaker: Gokhan Tur                           Affiliation: Amazon Alexa AI


Title: Past, Present, Future of Conversational AI


Abstract: Recent advances in deep learning methods for language processing, especially self-supervised learning, have generated new excitement about building more sophisticated Conversational AI systems. While this is partially true for social chatbots and retrieval-based applications, the underlying skeleton of goal-oriented systems has remained unchanged: most language understanding models still rely on supervised methods with manually annotated datasets, even though the resulting performances are significantly better with much less data. In this talk I will cover two directions we are exploring to break from this mold. The first aims to incorporate multimodal information for better understanding and semantic grounding. The second introduces an interactive self-supervision method that gathers immediate, actionable user feedback, converting moments of friction into opportunities for interactive learning.


Bio: Gokhan Tur is a leading artificial intelligence expert, especially in human/machine conversational language understanding systems. He has co-authored about 200 papers published in journals, books, and conference proceedings, and is the editor of the book "Spoken Language Understanding" (Wiley, 2011). Between 1997 and 1999, he was a visiting scholar at the CMU LTI, then the Johns Hopkins University, and the Speech Lab of SRI, CA. At AT&T Research (formerly Bell Labs), NJ (2001-2006), he worked on pioneering conversational systems like "How May I Help You?". He worked on the DARPA GALE and CALO projects at SRI, CA (2006-2010). Gokhan was a founding member of the Microsoft Cortana team, and later the Conversational Systems Lab at Microsoft Research (2010-2016). He worked as the Conversational Understanding Architect on the Apple Siri team (2014-2015) and as the Deep Conversational Understanding TLM at Google Research (2016-2018). He was a founding area director at Uber AI (2018-2020) and is currently with Amazon Alexa AI. He organized the HLT-NAACL 2007 Workshop on Spoken Dialog Technologies and the HLT-NAACL 2004 and AAAI 2005 Workshops on SLU, and edited the Speech Communication special issue on SLU in 2006. Dr. Tur is the recipient of the Speech Communication Journal Best Paper awards by ISCA for 2004-2006 and by EURASIP for 2005-2006. He served as spoken language processing area chair for the IEEE ICASSP 2007, 2008, and 2009 conferences and the IEEE ASRU 2005 workshop, as spoken dialog area chair for the HLT-NAACL 2007 conference, and as organizer of the SLT 2010 workshop. Gokhan is a Fellow of the IEEE and a member of ACL and ISCA. He was a member of the IEEE Speech and Language Technical Committee (SLTC) (2006-2008) and the IEEE SPS Industrial Relations Committee (2013-2014), and an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing (2010-2014) and Multimedia Processing (2014-2016) journals.


 


 


Date: 1/14/2022                                      Time: 2:40 PM PST


Speaker: Arafat Sultan                           Affiliation: IBM


Title: Beyond Empirical Risk Minimization for QA: Data Augmentation, Knowledge Distillation and More


Abstract: I will talk about some supervision methods that I have used over the past couple of years to train powerful, robust and domain-agnostic question answering (QA) models. As the title suggests, data augmentation and knowledge distillation will be two central topics in the talk. The discussion will also include different forms and flavors of QA, including reading comprehension, information retrieval and multilingual QA.
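Knowledge distillation of the kind the abstract mentions is typically framed as training a compact student model to match a larger teacher's softened output distribution. A minimal sketch of that idea, not the speaker's exact setup (the function names and temperature value are illustrative):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T^2 in the style of Hinton et al.'s
    knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2
```

For extractive QA, the two logit lists would be the models' answer-start (or answer-end) scores over passage tokens; the T² factor keeps gradient magnitudes comparable as the temperature changes.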


Bio: Arafat Sultan is a Research Staff Member at IBM, working on multilingual NLP technologies. His primary area of focus is question answering. He joined IBM in 2016 after completing his PhD at the University of Colorado Boulder. His work in different areas of NLP, including natural language understanding and generation, question answering, and domain adaptation of NLP systems, has been published in major NLP journals and conferences.


 


 


Date: 1/21/2022                                    Time: 2:40 PM PST


Speaker: Sujith Ravi                             Affiliation: SliceX AI


Title: Large-Scale Deep Learning with Structure


Abstract: Deep learning advances have enabled us to build high-capacity intelligent systems capable of perceiving and understanding the real world from text, speech, and images. Yet building real-world, scalable intelligent systems from "scratch" remains a daunting challenge, as it requires us to deal with ambiguity and data sparsity and to solve complex language, vision, dialog, and generation problems. In this talk, I will formalize some of the challenges involved in machine learning at scale. I will then introduce and describe our powerful neural graph learning framework, a precursor to the widely popular GNNs, which tackles these challenges by combining the power of deep learning with graphs that allow us to model the structure inherent in language and visual data. Our neural graph learning approach has been successfully used to power real-world applications at industry scale for response generation, image recognition, and multimodal experiences. Finally, I will highlight our recent work applying this framework to NLP tasks such as Knowledge Graph reasoning and multi-document abstractive summarization.
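The core idea behind neural graph learning, as later packaged in frameworks like Neural Structured Learning, is to augment a supervised loss with a term that encourages neighboring nodes in a graph to receive similar representations, so unlabeled nodes can influence training through their edges. A minimal sketch under that assumption (the function name, data layout, and α weight are illustrative, not the framework's actual API):

```python
def graph_regularized_loss(preds, labels, embeddings, edges, alpha=0.5):
    """Supervised loss on labeled nodes plus a graph regularizer that
    pulls the embeddings of neighboring nodes together.

    preds:      {node: scalar prediction}
    labels:     {node: scalar target} for the labeled subset only
    embeddings: {node: list of floats}
    edges:      [(u, v, weight), ...]
    """
    # Supervised term: squared error on the labeled nodes only.
    supervised = sum((preds[i] - y) ** 2 for i, y in labels.items())
    # Graph term: weighted squared distance between neighbor embeddings,
    # which propagates signal from labeled to unlabeled nodes.
    neighbor = sum(
        w * sum((a - b) ** 2 for a, b in zip(embeddings[u], embeddings[v]))
        for u, v, w in edges
    )
    return supervised + alpha * neighbor
```

The design choice here is that the graph term needs no labels at all, which is what makes the approach a form of semi-supervised learning.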


Bio: Dr. Sujith Ravi is the Founder & CEO of SliceX AI. Previously, he was the Director of Amazon Alexa AI, where he led efforts to build the future of multimodal conversational AI experiences at scale. Prior to that, he led and managed multiple ML and NLP teams and efforts at Google AI. He founded and headed Google's large-scale graph-based semi-supervised learning platform, its deep learning platform for structured and unstructured data, and its on-device machine learning efforts for products used by billions of people in Search, Ads, Assistant, Gmail, Photos, Android, Cloud, and YouTube. These technologies power conversational AI (e.g., Smart Reply); Web and Image Search; on-device predictions in Android and Assistant; and ML platforms such as Neural Structured Learning in TensorFlow, Learn2Compress as a Google Cloud service, and TensorFlow Lite for edge devices.


Dr. Ravi has authored over 100 scientific publications and patents at top-tier machine learning and natural language processing conferences. His work has been featured in the press, including Wired, Forbes, Forrester, the New York Times, TechCrunch, VentureBeat, Engadget, and New Scientist, and has won the EACL Best Paper Award Honorable Mention in 2021, the SIGDIAL Best Paper Award in 2019, and the ACM SIGKDD Best Research Paper Award in 2014. For multiple years, he was a mentor for Google Launchpad startups. Dr. Ravi was the Co-Chair (AI and deep learning) of the 2019 National Academy of Engineering (NAE) Frontiers of Engineering symposium. He was also a Co-Chair of ML workshops at ACL 2021, EMNLP 2020, ICML 2019, NAACL 2019, and NeurIPS 2018, and regularly serves as Senior/Area Chair and PC member of top-tier machine learning and natural language processing conferences such as NeurIPS, ICML, ACL, NAACL, AAAI, EMNLP, COLING, KDD, and WSDM.






Website: www.sravi.org  


Twitter: @ravisujith  


LinkedIn: https://www.linkedin.com/in/sujithravi 



 


 


Date: 1/28/2022                                    Time: 2:40 PM PST


Title: Discussion


 


 


Date: 2/4/2022                                    Time: 1:30 PM PST


Speaker: Emily Dinan                         Affiliation: Facebook AI Research


Title: Anticipating Safety Issues in E2E Conversational AI


Abstract: Over the last several years, end-to-end neural conversational agents have vastly improved in their ability to carry a chit-chat conversation with humans. However, these models are often trained on large datasets from the internet, and as a result, may learn undesirable behaviors from this data, such as toxic or otherwise harmful language. In this talk, I will discuss the problem landscape for safety for E2E convAI, including recent and related work. I will highlight tensions between values, potential positive impact, and potential harms, and describe a possible path for moving forward.


Bio: Emily Dinan is a Research Engineer at Facebook AI Research in New York. Her research interests include conversational AI, natural language processing, and fairness and responsibility in these fields. Recently she has focused on methods for preventing conversational agents from reproducing biased, toxic, or otherwise harmful language. Prior to joining FAIR, she received her master's degree in Mathematics from the University of Washington.


 


 


Date: 2/11/2022                                    Time: 2:40 PM PST

 

Speaker: Jason Weston                       Affiliation: Facebook AI Research

 

Title: A journey from ML & NNs to NLP and Beyond: Just more of the same isn't enough?

 

Abstract: The first half of the talk will look back on the last two decades of machine learning, neural network, and natural language processing research for dialogue, through my personal lens, to discuss the advances that have been made and the circumstances in which they happened -- and to try to give clues about what we should be working on for the future. The second half will dive deeper into some current first steps in those future directions, in particular trying to fix the problems of neural generative models to enable deeper reasoning with short- and long-term coherence, and to ground such dialogue agents in an environment where they can act and learn. We will argue that just scaling up current techniques, while a worthy investigation, will not be enough to solve these problems.


Bio: Jason Weston is a research scientist at Facebook, NY and a Visiting Research Professor at NYU. He earned his PhD in machine learning at Royal Holloway, University of London and at AT&T Research in Red Bank, NJ (advisors: Alex Gammerman, Volodya Vovk and Vladimir Vapnik) in 2000. From 2000 to 2001, he was a researcher at Biowulf Technologies. From 2002 to 2003 he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tuebingen, Germany. From 2003 to 2009 he was a research staff member at NEC Labs America, Princeton. From 2009 to 2014 he was a research scientist at Google, NY. His interests lie in statistical machine learning, with a focus on reasoning, memory, perception, interaction and communication. Jason has published over 100 papers, including best paper awards at ICML and ECML, and a Test of Time Award for his work "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning", ICML 2008 (with Ronan Collobert). He was part of the YouTube team that won a National Academy of Television Arts & Sciences Emmy Award for Technology and Engineering for Personalized Recommendation Engines for Video Discovery. He was listed by AMiner as the 16th most influential machine learning scholar and as one of the top 50 authors in Computer Science in Science.


 


 


Date: 2/18/2022                                    Time: 2:40 PM PST


Title: Discussion


 


 


Date: 2/25/2022                                    Time: 2:40 PM PST

 

Speaker: Michelle Zhou                        Affiliation: Juji

 

Title: Practical NLP Challenges in Powering No-Code Cognitive AI Assistants

 

Abstract: With the rapid advances of AI technologies and their applications, it is inevitable that AI will become a big part of our personal and professional lives. While individuals and organizations wish to enlist AI's help, not every individual or organization has the required skills or financial resources to create their own AI solutions. In this talk, Michelle will use the development of Cognitive AI Assistants as an example to describe the NLP challenges involved, highlight the practical solutions, and explain the roles that people in different fields could play on the road to democratizing AI.


Bio: Dr. Michelle Zhou is a Co-founder and CEO of Juji, Inc., an Artificial Intelligence (AI) company that specializes in developing Cognitive Artificial Intelligence (AI) Assistants in the form of chatbots. She is an expert in the field of Human-Centered AI, an interdisciplinary area that intersects AI and Human-Computer Interaction (HCI). Zhou has authored more than 100 scientific publications on subjects including conversational AI, personality analytics, and interactive visual analytics of big data. Prior to founding Juji, she spent 15 years at IBM Research and the Watson Group, where she managed the research and development of Human-Centered AI technologies and solutions, including IBM Watson Personality Insights. Zhou serves as Editor-in-Chief of ACM Transactions on Interactive Intelligent Systems (TiiS) and an Associate Editor of ACM Transactions on Intelligent Systems and Technology (TIST), and was formerly the Steering Committee Chair for the ACM International Conference Series on Intelligent User Interfaces (IUI). She is an ACM Distinguished Member. https://www.linkedin.com/in/mxzhou/


 


 


Date: 3/4/2022                                    Time: 2:40 PM PST

 

Speaker: Xiao Bai                               Affiliation: Yahoo! Inc.

 

Title: Neural keyphrase generation with layer-wise coverage attention

 

Abstract: Generating a set of keyphrases that summarize the core ideas discussed in a document has a significant impact on many natural language processing and information retrieval applications, such as sentiment analysis, question answering, document retrieval, document categorization, and contextual advertising. In recent years, the deep neural sequence-to-sequence framework has demonstrated promising results in automatic keyphrase generation. In this talk, I will discuss challenges related to this task and introduce our recently developed neural keyphrase generation model. I will also present the results of our model on real-world datasets from both the scientific domain and the web domain. Finally, I will briefly discuss the application of the keyphrase generation model in contextual advertising.
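Coverage attention of the kind named in the title keeps a running sum of past attention weights and penalizes re-attending to source positions that are already covered, which discourages generating duplicate keyphrases. A minimal single-step sketch; the dot-product scores and subtractive coverage penalty are illustrative choices, not the authors' exact model:

```python
import math

def attention_with_coverage(query, keys, coverage, coverage_weight=1.0):
    """One decoding step of coverage-aware attention: scores are reduced
    for source positions already attended to, and the coverage vector
    accumulates the new attention weights for future steps."""
    scores = [
        sum(q * k for q, k in zip(query, key)) - coverage_weight * c
        for key, c in zip(keys, coverage)
    ]
    # Numerically stable softmax over the penalized scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]
    new_coverage = [c + a for c, a in zip(coverage, attn)]
    return attn, new_coverage
```

A layer-wise variant would maintain coverage inside each decoder layer rather than only at the output; the exact placement varies by model.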


Bio: Xiao Bai is a Principal Research Scientist at Yahoo Research. She received her PhD in Computer Science from INRIA, France. Her research is primarily focused on information retrieval, natural language understanding, and their applications in online advertising. Her contributions to various domains of research have been published in top venues where she regularly serves as a PC member, such as SIGIR, CIKM, WWW, and KDD.


 


 


Date: 3/11/2022                                 Time: 2:40 PM PST


Title: Discussion