Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Call for Papers: JNLE Special Issue on Representation of
Sentence Meaning (Ondrej Bojar)
----------------------------------------------------------------------
Message: 1
Date: Wed, 3 Jan 2018 15:34:15 +0100 (CET)
From: Ondrej Bojar <bojar@ufal.mff.cuni.cz>
Subject: [Moses-support] Call for Papers: JNLE Special Issue on
Representation of Sentence Meaning
To: moses-support@mit.edu, nmt-developers
<nmt-developers@googlegroups.com>
Cc: Bonnie Webber <bonnie@inf.ed.ac.uk>, Raffaella Bernardi
<bernardi@disi.unitn.it>, Holger Schwenk <schwenk@fb.com>
Message-ID:
<2112458282.1120564.1514990055315.JavaMail.zimbra@ufal.mff.cuni.cz>
Content-Type: text/plain; charset=utf-8
(apologies for multiple copies)
Preliminary Call for Papers:
JNLE Special Issue on Representation of Sentence Meaning
Representation of Sentence Meaning: Where Are We?
Details: http://ufal.mff.cuni.cz/jnle-on-sentence-representation/
This is a preliminary call for papers for a special issue of Natural
Language Engineering (JNLE) on Representation of sentence meaning.
Linguistically, the basic unit of meaning is a sentence. Sentence
meaning has been studied for centuries, yielding representations that
range from ones reflecting properties (or theories) of the
syntax-semantics boundary (e.g., FGD, MTT, AMR) to representations with
the properties of complex but expressive logics (e.g., intensional
logic). The recent success of neural networks in natural language
processing (especially at the lexical level) has raised the possibility
of representation learning of sentence meaning, i.e., observing the
continuous vector space in a hidden layer of a deep learning system
trained to perform one or more specific tasks.
Multiple workshops have explored this possibility in the past few years,
e.g. Workshop on Representation Learning for NLP (2016, 2017;
https://sites.google.com/site/repl4nlp2017/), Workshop on Evaluating
Vector Space Representations for NLP (2016, 2017;
https://repeval2017.github.io/), Representation Learning
(https://simons.berkeley.edu/workshops/machinelearning2017-2) or the
Dagstuhl seminar (http://www.dagstuhl.de/17042).
Interesting behaviour and properties of continuous representations have
already been observed. For lexical representations (embeddings), linear
combinations in the word vector space have been taken to correspond to
different semantic relations between the words (Mikolov et al., 2013).
Learned representations can be evaluated intrinsically in terms of
various similarities, although this type of evaluation suffers from
some well-known problems (Faruqui et al., 2016), or extrinsically in
terms of performance on downstream tasks or of their relation to
cognitive processes (e.g., Auguste et al., 2017).
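
Purely as an illustration of the linear-combination property mentioned
above, here is a minimal sketch (in Python/NumPy, with made-up toy
vectors and a hypothetical analogy() helper, not real word2vec
embeddings) of the classic king - man + woman analogy:

    import numpy as np

    # Toy embedding table; a real experiment would load pretrained
    # vectors (e.g. word2vec or GloVe) instead of these numbers.
    embeddings = {
        "king":  np.array([0.80, 0.65, 0.10]),
        "queen": np.array([0.78, 0.64, 0.90]),
        "man":   np.array([0.75, 0.10, 0.12]),
        "woman": np.array([0.73, 0.09, 0.91]),
    }

    def analogy(a, b, c, table):
        # Closest word (by cosine) to vec(a) - vec(b) + vec(c),
        # excluding the query words, as is standard in analogy tests.
        target = table[a] - table[b] + table[c]
        cos = lambda u, v: float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        candidates = {w: v for w, v in table.items() if w not in {a, b, c}}
        return max(candidates, key=lambda w: cos(candidates[w], target))

    print(analogy("king", "man", "woman", embeddings))  # -> "queen"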
Continuous representations of sentences are considerably harder to
produce and assess. The first question is whether the representation
should be of a fixed size, as with word embeddings, or whether it
should reflect the length of the sentence, e.g., a matrix of encoder
states along the sentence. A variable-length representation can be flat
or capture the hierarchical structure of the sentence, and simple
operations such as matrix multiplication can serve as the basis of
meaning compositionality (Socher et al., 2012). Empirical results to
date are mixed:
bidirectional gated RNNs (BiLSTM, BiGRU) with attention, corresponding
to variable-length representations, seem the best empirical solution
when trained directly for a particular NLP task (POS tagging, named
entity recognition, syntactic parsing, reading comprehension, question
answering, text summarization, machine translation). If the task is not
to be constrained a priori, researchers have advocated universal
sentence representations, which can be trained on one task (e.g.
predicting surrounding sentences in Skip-Thoughts) and tested on a range
of others. Training universal sentence representations on sentence pairs
manually annotated for entailment (natural language inference, NLI)
leads to better performance despite the much smaller training data
(Conneau et al., 2017). In both cases, there is a lack of analysis of
the learned vector space from the perspective of linguistic adequacy:
which phenomena are directly reflected in the space, if any? Semantic
similarity (paraphrasing)? Various oppositions? Gradations (in number,
tense)? Entailment? Compositionality (e.g. relations between main and
adjunct and/or subordinate clauses)?
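
As a rough, non-authoritative illustration of the fixed-size vs.
variable-length distinction discussed above, the sketch below (assuming
PyTorch, with made-up dimensions and a random toy input) encodes a
sentence with a bidirectional GRU, keeps the matrix of encoder states
as a variable-length representation, and max-pools it into a fixed-size
sentence vector in the spirit of Conneau et al. (2017):

    import torch
    import torch.nn as nn

    vocab_size, emb_dim, hidden_dim = 1000, 32, 64   # made-up sizes
    embed = nn.Embedding(vocab_size, emb_dim)
    encoder = nn.GRU(emb_dim, hidden_dim, bidirectional=True,
                     batch_first=True)

    tokens = torch.randint(0, vocab_size, (1, 7))  # one toy sentence, 7 ids
    states, _ = encoder(embed(tokens))             # shape (1, 7, 2*hidden_dim)

    variable_length_repr = states                  # one vector per token
    fixed_size_repr = states.max(dim=1).values     # length-independent vector

    print(variable_length_repr.shape)              # torch.Size([1, 7, 128])
    print(fixed_size_repr.shape)                   # torch.Size([1, 128])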
TreeLSTMs have the capacity to learn a latent grammar when trained,
e.g., to classify sentence pairs in terms of entailment. They seem to
perform well, and yet the representation that is learned does not
conform to traditional syntax or semantics (Williams et al., 2017).
The reason for proposing this special issue is that presentation and
discussion of sentence-level meaning representation is fragmented across
many fora (conferences, workshops, but also pre-prints only). We believe
that some unified vision is needed in order to support coherent future
research. The goal of the proposed special issue of Natural Language
Engineering is thus to broadly map the state of the art in continuous
sentence meaning representation and summarize the longer-term goals in
representing sentence meaning in general.
Can deep learning for particular tasks get us to representations
similar to the results of formal semantics? Or is a single formal
definition of sentence meaning an elusive goal, and are universal
sentence embeddings impossible, e.g., because no such entity is
observable in human cognition?
The special issue will seek long research papers, surveys and position
papers addressing primarily the following topics:
* Which properties of meaning representations are universally the most
desirable.
* Comparisons of types of meaning representations (e.g. fixed-size vs.
variable-length) and methods for learning them.
* Techniques for exploring learned meaning representations.
* Evaluation methodologies for meaning representations, including
surveys thereof.
* Extrinsic evaluation by relations to cognitive processes.
* Relation between traditional symbolic meaning representations and the
learned continuous ones.
* Broad summaries of psycholinguistic evidence describing properties of
meaning representation in the human brain.
More details will be available at:
* http://ufal.mff.cuni.cz/jnle-on-sentence-representation/
Tentative Schedule:
* Spring 2018: Call for papers.
* July 2018: Abstract submission deadline (to allow pre-empting
overlaps between survey-like articles)
* October 2018: Submission deadline
* December 2018: Deadline for reviews and responses to authors
* February 2019: Camera-ready deadline
Guest Editors of the special issue:
* Ondřej Bojar (Charles University)
* Raffaella Bernardi (University of Trento)
* Holger Schwenk (Facebook AI Research)
* Bonnie Webber (University of Edinburgh)
Preliminary Guest Editorial Board:
* Marco Baroni (Facebook AI Research, University of Trento)
* Bob Coecke (University of Oxford)
* Alexis Conneau (Facebook AI Research)
* Katrin Erk (University of Texas at Austin)
* Orhan Firat (Google)
* Albert Gatt (University of Malta)
* Caglar Gulcehre (Google)
* Aurelie Herbelot (Universitat Pompeu Fabra)
* Eva Maria Vecchi (University of Cambridge)
* Louise McNally (Universitat Pompeu Fabra)
* Laura Rimell (University of Cambridge / University of Oxford)
* Mehrnoosh Sadrzadeh (Queen Mary University of London)
* Hinrich Schuetze (Ludwig Maximilian University of Munich)
* Mark Steedman (University of Edinburgh)
* Ivan Titov (University of Edinburgh)
--
Ondrej Bojar (mailto:obo@cuni.cz / bojar@ufal.mff.cuni.cz)
http://www.cuni.cz/~obo
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 135, Issue 3
*********************************************