Moses-support Digest, Vol 164, Issue 5

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. CFP: COLING 2nd Workshop on Gender Bias for Natural Language
Processing (Marta Ruiz Costa-Jussa)


----------------------------------------------------------------------

Message: 1
Date: Tue, 16 Jun 2020 12:27:58 +0200
From: Marta Ruiz Costa-Jussa <marta.ruiz@upc.edu>
Subject: [Moses-support] CFP: COLING 2nd Workshop on Gender Bias for
Natural Language Processing
To: Marta Ruiz Costa-Jussa <marta.ruiz@upc.edu>
Message-ID:
<CAJrQE+R-jHVC0RMCRG1_dgucC+8=4YUM4t1u4K9=DE4Dp3cnig@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

COLING 2nd Workshop on Gender Bias for Natural Language Processing

http://genderbiasnlp.talp.cat

13 December 2020, Barcelona


Gender and other demographic biases in machine-learned models are of
increasing interest to the scientific community and industry. Models of
natural language are highly affected by such biases, which are present
in widely used products and can lead to poor user experiences. There is
a growing body of research into improved representations of gender in
NLP models. Key approaches are to build and use balanced training and
evaluation datasets (e.g. Reddy & Knight, 2016; Webster et al., 2018;
Madaan et al., 2018) and to change the learning algorithms themselves
(e.g. Bolukbasi et al., 2016; Chiappa et al., 2018). While these
approaches show promising results, much remains to be done to address
both identified and future bias issues. To make progress as a field, we
need to create widespread awareness of bias and reach a consensus on how
to work against it, for instance by developing standard tasks and
metrics. Our workshop provides a forum to achieve this goal.
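
As a concrete illustration of the second family of approaches, the
projection step at the heart of hard debiasing (Bolukbasi et al., 2016)
fits in a few lines of Python. This is a minimal sketch rather than the
authors' implementation; the vocabulary and toy 4-dimensional vectors
below are invented for illustration, whereas real use would start from
pretrained embeddings.

import numpy as np

def gender_direction(pairs, vectors):
    """Average, normalized difference vector over gendered word pairs."""
    diffs = [vectors[a] - vectors[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def debias(v, direction):
    """Remove the component of v that lies along the bias direction."""
    return v - np.dot(v, direction) * direction

# Toy embeddings (invented values, for illustration only).
vectors = {
    "he":    np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":   np.array([-1.0, 0.2, 0.1, 0.0]),
    "man":   np.array([ 0.9, 0.1, 0.3, 0.1]),
    "woman": np.array([-0.9, 0.1, 0.3, 0.1]),
    "nurse": np.array([-0.6, 0.5, 0.4, 0.2]),
}

d = gender_direction([("he", "she"), ("man", "woman")], vectors)
print("bias component before:", np.dot(vectors["nurse"], d))
print("bias component after: ", np.dot(debias(vectors["nurse"], d), d))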


Topics of interest

We invite submissions of technical work exploring the detection,
measurement, and mitigation of gender bias in NLP models and
applications. Other important topics include the creation of datasets
exploring demographics, metrics to identify and assess relevant biases,
and fairness in NLP systems. Finally, the workshop is also open to
non-technical work addressing sociological perspectives, and we strongly
encourage critical reflections on the sources and implications of bias
throughout all types of work.
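
As one illustration of the bias metrics mentioned above, offered purely
as an example and not prescribed by this call, a WEAT-style association
score (Caliskan et al., 2017) compares a target word's mean cosine
similarity to two sets of attribute words. The two-dimensional vectors
below are toy values.

import numpy as np

def cos(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    """Mean similarity of w to attribute set A minus attribute set B."""
    return (np.mean([cos(vec[w], vec[a]) for a in A])
            - np.mean([cos(vec[w], vec[b]) for b in B]))

# Toy embeddings (invented values, for illustration only).
vec = {
    "engineer": np.array([0.9, 0.1]),
    "nurse":    np.array([0.1, 0.9]),
    "he":       np.array([1.0, 0.0]),
    "she":      np.array([0.0, 1.0]),
}

# Positive scores indicate association with the first set ("he").
for word in ("engineer", "nurse"):
    print(word, round(association(word, ["he"], ["she"], vec), 3))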


Paper Submission Information

Submissions will be accepted as short papers (4-6 pages) and as long papers
(8-10 pages), plus additional pages for references, following the COLING
2020 guidelines. Supplementary material can be added. Blind submission is
required.

This year, we introduce the requirement that papers include a statement
which explicitly defines (a) what system behaviours are considered bias
in the work and (b) why those behaviours are harmful, in what ways, and
to whom (cf. Blodgett et al. (2020) <https://arxiv.org/abs/2005.14050>).
We encourage authors to engage with definitions of bias and other
relevant concepts, such as prejudice, harm, and discrimination, from
outside NLP, especially from the social sciences and normative ethics,
in this statement and in their work in general.

Paper submission link: https://www.softconf.com/coling2020/GeBNLP/

Important dates

Aug 4. Anonymity period begins

Sep 4. Deadline for submission

Oct 9. Notification of acceptance

Nov 1. Camera-ready submission


Keynote speakers

Natalie Schluter, IT University of Copenhagen, Denmark

Dirk Hovy, Bocconi University, Italy


Programme Committee

Svetlana Kiritchenko, National Research Council of Canada, Canada

Kai-Wei Chang, University of California, Los Angeles, US

Sharid Loáiciga, University of Gothenburg, Sweden

Zhengxian Gong, Soochow University, China

Marta Recasens, Google, US

Bonnie Webber, University of Edinburgh, UK

Ben Hachey, Harrison.ai, Australia

Mercedes García Martínez, Pangeanic, Spain

Sonja Schmer-Galunder, Smart Information Flow Technologies, US

Matthias Gallé, NAVER LABS Europe, France

Sverker Sikström, Lund University, Sweden

Dorna Behdadi, University of Gothenburg, Sweden

Steve Wilson, University of Edinburgh, UK

Kathleen Siminyu, Artificial Intelligence for Development - Africa Network

Dirk Hovy, Bocconi University, Italy

Carla Pérez Almendros, Cardiff University, UK

Jenny Björklund, Uppsala University, Sweden


Organizers

Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona

Christian Hardmeier, Uppsala University

Kellie Webster, Google AI Language, New York

Will Radford, Canva, Sydney


Contact persons

Marta R. Costa-jussà: marta (dot) ruiz (at) upc (dot) edu

------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 164, Issue 5
*********************************************
