CoNLL 2024

May 24, 2024

CoNLL is a yearly conference organized by SIGNLL (ACL's Special Interest Group on Natural Language Learning), focusing on theoretically, cognitively and scientifically motivated approaches to computational linguistics. This year, CoNLL will be co-located with EMNLP 2024. Registration for CoNLL can be completed through EMNLP (workshop 1).

Submission page available here.

Papers that have received reviews in current or previous ARR cycles can be committed to CoNLL 2024 here by August 30, 2024. 

Accepted papers

A list of papers that have been accepted for CoNLL 2024 is available here.

Program

The program for CoNLL 2024 is available here.

Call for Papers

SIGNLL invites submissions to the 28th Conference on Computational Natural Language Learning (CoNLL 2024). The focus of CoNLL is on theoretically, cognitively and scientifically motivated approaches to computational linguistics, rather than on work driven by particular engineering applications. Such approaches include:

  • Computational learning theory and other techniques for theoretical analysis of machine learning models for NLP
  • Models of first, second and bilingual language acquisition by humans
  • Models of sign language acquisition, understanding, and production
  • Models of language evolution and change
  • Computational simulation and analysis of findings from psycholinguistic and neurolinguistic experiments
  • Analysis and interpretation of NLP models, using methods inspired by cognitive science or linguistics or other methods
  • Data resources, techniques and tools for scientifically-oriented research in computational linguistics
  • Connections between computational models and formal languages or linguistic theories
  • Linguistic typology, translation, and other multilingual work
  • Theoretically, cognitively and scientifically motivated approaches to text generation

We welcome work targeting any aspect of language, including:

  • Speech and phonology
  • Syntax and morphology
  • Lexical, compositional and discourse semantics
  • Dialogue and interactive language use
  • Sociolinguistics
  • Multimodal and grounded language learning

Submissions are not restricted to the topics in this list. However, a submission's relevance to the conference's focus on theoretically, cognitively and scientifically motivated approaches will play an important role in the review process.

Submitted papers must be anonymous and use the EMNLP 2024 template. Submitted papers may consist of up to 8 pages of content plus unlimited space for references. Authors of accepted papers will have an additional page to address reviewers' comments in the camera-ready version (9 pages of content in total, excluding references). Optional anonymized supplementary materials and a PDF appendix are allowed. The appendix should be submitted as a separate PDF file; reviewers are not required to consider it, so it should not contain any content essential to understanding the paper. Please refer to the EMNLP 2024 Call for Papers for more details on the submission format. Note that, unlike EMNLP, we do not mandate that papers include a section discussing the limitations of the work. However, we strongly encourage authors to include such a section in the appendix.

Please submit via OpenReview. CoNLL 2024 will accept ARR submissions provided their full reviews are completed by July 1, 2024. Please note that CoNLL 2024 is an in-person conference. We expect all accepted papers to be presented physically, and presenting authors must register through EMNLP (workshop).

Timeline
(All deadlines are 11:59pm UTC-12h, AoE)
Submission deadline: Sunday, July 7, 2024 (extended from Monday, July 1, 2024)
ARR Commitment deadline: Friday, August 30, 2024
Notification of acceptance: Tuesday, September 24, 2024 (delayed from Friday, September 20, 2024)
Camera ready papers due: Friday, October 11, 2024
Conference: November 15 - 16, 2024

Venue
CoNLL 2024 will be held in person, alongside EMNLP 2024, in Miami, Florida.

Multiple submission policy
CoNLL 2024 will not accept papers that are under submission to, or that will be submitted to, other meetings or publications, including EMNLP. Papers submitted elsewhere, as well as papers that overlap significantly in content or results with papers that will be (or have been) published elsewhere, will be rejected. Authors submitting more than one paper to CoNLL 2024 must ensure that the submissions do not overlap significantly (>25%) with each other in content or results.

Information About Travel Visas
If you require a travel visa to travel to Miami, Florida, please fill out this form: Travel Visa Form

This form has been prepared by the EMNLP organizers to facilitate the visa application process. If you need a visa, please provide your information as early as possible. If you have further questions, please contact the local chairs, Mark Finlayson and Zoey Liu, here: EMNLP Organizers

CoNLL 2024 Chairs and Organizers

The conference's co-chairs are:

Malihe Alikhani (Northeastern University, MA, USA)

Libby Barak (Montclair State University, NJ, USA)


Publication chairs:

Mert Inan (Northeastern University, MA, USA)

Julia Watson (University of Toronto, ON, Canada)

SIGNLL

  • SIGNLL President: Omri Abend (Hebrew University of Jerusalem, Israel)
  • SIGNLL Secretary: Antske Fokkens (Vrije Universiteit Amsterdam, Netherlands)

Invited speakers

Thamar Solorio (Mohamed bin Zayed University of Artificial Intelligence, MBZUAI)

Title: Towards AI models that can help us to become better global social beings

Abstract: Cultural norms and values fundamentally shape our social interactions. Communication within any society reflects these cultural contexts. For example, while direct eye contact is often seen as a sign of confidence in many Western cultures, it may be viewed as disrespectful in other parts of the world. Moreover, human-human interactions include so much more than just the words we utter; nonverbal communication, including body language and other cues, provides rich signals to those around us.

As vision language models (VLMs) are increasingly integrated into user-facing applications, it becomes relevant to ask whether, and to what extent, this technology can robustly process these signals. My research group is interested in developing evaluation frameworks to assess the abilities of VLMs in interpreting social cues, and in developing new approaches that can assist us and, perhaps, enhance our cross-cultural human-human interactions.

Bio: Thamar Solorio is a professor in the NLP department at MBZUAI. She is also a tenured professor of Computer Science at the University of Houston. She is the director and founder of the RiTUAL Lab. Her research interests include NLP for low-resource settings and multilingual data, including code-switching and information extraction. More recently, she has moved towards language and vision problems, focusing on developing inclusive NLP. She received a National Science Foundation (NSF) CAREER award for her work on authorship attribution and was awarded the 2014 Emerging Leader ABIE Award in Honor of Denice Denton. She served two terms as an elected board member of the North American Chapter of the Association for Computational Linguistics (NAACL) and was PC co-chair for NAACL 2019. She is an Editor-in-Chief for the ACL Rolling Review (ARR) initiative and was a member of the advisory board for ARR. She serves as general chair for the 2024 Conference on Empirical Methods in Natural Language Processing.

Lorna Quandt (Gallaudet University)

Title: Integrating AI-Driven Sign Language Technologies in Education: Recognition, Generation, and Interaction

Abstract: This talk explores integrating AI-driven technologies in sign language research, covering the unique challenges of sign language recognition and generation. Dr. Quandt will explore these cutting-edge considerations through the lens of two research projects, ASL Champ! and BRIDGE. Both projects focus on sign language recognition and generation, which is crucial for advancing interaction in virtual and educational environments. ASL Champ! utilizes a dataset of 3D signs to enhance deep-learning-powered sign recognition in virtual reality. At the same time, BRIDGE extends this work by incorporating both recognition and generation of signs to create a more robust, interactive experience. This dual focus underscores the importance of pursuing recognition and generation in tandem rather than treating them as entirely distinct challenges. By leveraging advances in AI and natural language processing (NLP), we can create technologies that recognize and generate signs and facilitate deeper understanding and use of signed languages. These advancements hold great educational potential, particularly in providing more accessible tools for deaf students and enabling broader instruction in sign language. The talk will also address how these innovations can reshape the NLP field by widening the focus beyond spoken/written language and into multimodal, signed, and nonverbal aspects of language, which can inform all linguistic research.

Bio: Dr. Lorna Quandt is the director of the Action & Brain Lab at Gallaudet University in Washington, D.C. She serves as Co-Director of the VL2 Research Center alongside Melissa Malzkuhn. Dr. Quandt is an Associate Professor in the Ph.D. in Educational Neuroscience (PEN) program and the Science Director of the Motion Light Lab. She founded the Action & Brain Lab in early 2016. Before that, she obtained her BA in Psychology from Haverford College and a Ph.D. in Psychology, specializing in Brain & Cognitive Sciences, from Temple University. She completed a postdoctoral fellowship at the University of Pennsylvania, working with Dr. Anjan Chatterjee. Her research examines how knowledge of sign language changes perception, particularly visuospatial processing. Dr. Quandt is also pursuing the development of research-based educational technology to create new ways to learn signed languages in virtual reality.

Areas and ACs

  • Computational Psycholinguistics, Cognition and Linguistics: Nathan Schneider
  • Computational Social Science: Kate Atwell
  • Interaction and Grounded Language Learning: Anthony Sicilia
  • Lexical, Compositional and Discourse Semantics: Shira Wein
  • Multilingual Work and Translation: Yuval Marton
  • Natural Language Generation: Tuhin Chakrabarty
  • Resources and Tools for Scientifically Motivated Research: Venkat
  • Speech and Phonology: Huteng Dai
  • Syntax and Morphology: Leshem Choshen
  • Theoretical Analysis and Interpretation of ML Models for NLP: Kevin Small


Sponsor

Webmaster: Jens Lemmens