Moses-support Digest, Vol 174, Issue 3

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. Special Issue - Creative and Generative Natural Language
Processing and Its Applications (Krzysztof Wołk)
2. Final CFP: ACL-IJCNLP 3rd Workshop on Gender Bias for Natural
Language Processing (Marta Ruiz Costa-Jussa)


----------------------------------------------------------------------

Message: 1
Date: Thu, 8 Apr 2021 13:42:27 +0200
From: Krzysztof Wołk <kwpjwstk@gmail.com>
Subject: [Moses-support] Special Issue - Creative and Generative
Natural Language Processing and Its Applications
To: Krzysztof Wołk <kwpjwstk@gmail.com>
Message-ID: <024001d72c6c$40c19ec0$c244dc40$@gmail.com>
Content-Type: text/plain; charset="iso-8859-2"

Good morning,



I am the guest editor for a Special Issue entitled "Creative and Generative
Natural Language Processing and Its Applications" in an open access journal
"Electronics" (ISSN 2079-9292; CODEN: ELECGJ, IF 2.412). Given your renowned
expertise and significant contributions to this field, I would like to
invite you to contribute a research or review article to this issue.

More information can be found at:

https://www.mdpi.com/journal/electronics/special_issues/Creative_and_Generative_NL_Processing

Author Benefits

* High Visibility-Indexed by various databases (including SCIE
(WoS), Scopus, and PubMed (PMC)).

* Fast Processing-A first decision is provided to authors
approximately 13.4 days after submission; acceptance to publication then
takes a further 3.4 days.

* Follow-Up Promotion-The Electronics promotion team provides a FREE
promotion service after paper publication to publicize your papers and
increase exposure to colleagues.

Article Processing Charge

The article processing charge (APC) for each accepted paper is 1800 Swiss
Francs (CHF). Each submitted manuscript is handled immediately and, once it
has passed peer review, is published online without waiting for the other
papers in the Special Issue.

Discount

1. A discount may apply if your institute has established an institutional
membership with MDPI. For more information, please see
http://www.mdpi.com/about/memberships.

2. If you have helped to review for MDPI journals and received a discount
voucher, you can use it toward this publication.

How to Submit

1. First-time users must register at http://susy.mdpi.com/ before making a
submission.

2. Log in to your account and click "Submit Manuscript" under the
Submissions menu.

3. Fill in manuscript details from Steps 1 to 4:

Journal: Electronics; Special Issue: Creative and Generative Natural
Language Processing and Its Applications

4. Click the "submit" button after you finish all the steps.

I hope you accept this invitation and share your innovative work with our
readers. If you need more time to prepare your paper, please feel free to
let me know.



Best regards,
Guest Editors
Dr. Krzysztof Wołk

Dr. Ida Skubis

Dr. Tomasz Grzes



-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20210408/5a7108b9/attachment.html

------------------------------

Message: 2
Date: Mon, 19 Apr 2021 15:55:21 +0200
From: Marta Ruiz Costa-Jussa <marta.ruiz@upc.edu>
Subject: [Moses-support] Final CFP: ACL-IJCNLP 3rd Workshop on Gender
Bias for Natural Language Processing
To: Marta Ruiz Costa-Jussa <marta.ruiz@upc.edu>
Message-ID:
<CAJrQE+QO1q1o68kkej7RHh0AvJCkY4k8V=itWBsh6CWFXuK2rw@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

ACL-IJCNLP 3rd Workshop on Gender Bias for Natural Language Processing

http://genderbiasnlp.talp.cat

5-6 August, Bangkok, Thailand





Gender and other demographic biases (e.g. race, nationality, religion) in
machine-learned models are of increasing interest to the scientific
community and industry. Models of natural language are highly affected by
such biases, which are present in widely used products and can lead to poor
user experiences.

There is a growing body of research into improved representations of gender
in NLP models. Popular approaches include building and using balanced
training and evaluation datasets (e.g. Reddy & Knight, 2016, Webster et
al., 2018, Maadan et al., 2018) and changing the learning algorithms
themselves (e.g. Bolukbasi et al., 2016, Chiappa et al., 2018). While these
approaches show promising results, there is more to do to solve identified
and future bias issues. To make progress as a field, we need to create
widespread awareness of bias and a consensus on how to work against it, for
instance by developing standard tasks and metrics. Our workshop provides a
forum to achieve this goal.

The workshop follows two successful previous editions, collocated with ACL
2019 and COLING 2020, respectively. Following the successful introduction
of bias statements at GeBNLP 2020, we continue to require bias statements
in this year's workshop and will again ask the program committee to engage
with the bias statements in the papers they review. This helps to make
clear (a) what system behaviors are considered as bias in the work, and (b)
why those behaviors are harmful, in what ways, and to whom. We encourage
authors to engage with definitions of bias and other relevant concepts such
as prejudice, harm, and discrimination from outside NLP, especially from
the social sciences and normative ethics, both in this statement and in
their work in general. We will also keep pushing for the integration of
other communities, such as the social sciences, and for a wider
representation of approaches dealing with bias.





Topics of interest



We invite submissions of technical work exploring the detection,
measurement, and mediation of gender bias in NLP models and applications.
Other important topics include the creation of datasets exploring
demographics, metrics to identify and assess relevant biases, and work
focusing on fairness in NLP systems. Finally, the workshop is also open to
non-technical work addressing sociological perspectives, and we strongly
encourage critical reflections on the sources and implications of bias
throughout all types of work.



Paper Submission Information



Submissions will be accepted as short papers (4-6 pages) and as long papers
(8-10 pages), plus additional pages for references, following the
ACL-IJCNLP 2021 guidelines. Supplementary material can be added, but should
not be central to the argument of the paper. Blind submission is required.

Each paper should include a statement that explicitly defines (a) what
system behaviors are considered as bias in the work and (b) why those
behaviors are harmful, in what ways, and to whom (cf. Blodgett et al. (2020)
<https://arxiv.org/abs/2005.14050>). More information on this requirement,
which was successfully introduced at GeBNLP 2020, can be found on the workshop
website
<https://genderbiasnlp.talp.cat/gebnlp2020/how-to-write-a-bias-statement/>.
We also encourage authors to engage with definitions of bias and other
relevant concepts such as prejudice, harm, and discrimination from outside
NLP, especially from the social sciences and normative ethics, in this
statement and in their work in general.


Important dates

April 26, 2021: Workshop Paper Due Date

May 28, 2021: Notification of Acceptance

June 7, 2021: Camera-ready papers due

August 5-6, 2021: Workshop Dates



Keynote

Sasha Luccioni, MILA, Canada



Programme Committee



Svetlana Kiritchenko, National Research Council Canada, Canada

Sharid Loáiciga, University of Gothenburg, Sweden

Kaiji Lu, Carnegie Mellon University, US

Marta Recasens, Google, US

Bonnie Webber, University of Edinburgh, UK

Ben Hachey, Harrison.ai, Australia

Mercedes García Martínez, Pangeanic, Spain

Sonja Schmer-Galunder, Smart Information Flow Technologies, US

Matthias Gallé, NAVER LABS Europe, France

Sverker Sikström, Lund University, Sweden

Dirk Hovy, Bocconi University, Italy

Carla Perez Almendros, Cardiff University, UK

Jenny Björklund, Uppsala University

Su Lin Blodgett, UMass Amherst

Will Radford, Canvas, Australia


Organizers



Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Barcelona

Hila Gonen, Amazon

Christian Hardmeier, IT University of Copenhagen/Uppsala University

Kellie Webster, Google AI Language, New York



Contact persons

Marta R. Costa-jussà: marta (dot) ruiz (at) upc (dot) edu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20210419/b5fc080a/attachment.html

------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 174, Issue 3
*********************************************
