Moses-support Digest, Vol 113, Issue 18

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. WMT 2016 Shared Task on Quality Estimation - Call for
Participation (Lucia Specia)
2. ** DEADLINE APPROACHING ** FINAL CFP: Computer Speech and
Language Special Issue on Deep Learning for Machine Translation
(Marta Ruiz)
3. Re: Scripts for n-best-list rescoring (Lane Schwartz)
4. Re: Scripts for n-best-list rescoring (Philipp Koehn)


----------------------------------------------------------------------

Message: 1
Date: Tue, 8 Mar 2016 10:49:00 +0000
From: Lucia Specia <lspecia@gmail.com>
Subject: [Moses-support] WMT 2016 Shared Task on Quality Estimation -
Call for Participation
To: mt-list@eamt.org, corpora@uib.no, moses-support@mit.edu
Message-ID:
<CAALeUxwWB-CKSb4UcUeoB3K1EZRq=PSKFW35qY-UtrPrZxyLpw@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

CALL FOR PARTICIPATION

========================================================
WMT 2016 Shared Task on Quality Estimation
========================================================
At WMT 2016 (co-located with ACL 2016)

*News for 2016*
1) a phrase-level prediction task
2) a document-level prediction task with full documents and labels based on
post-editing
3) a large, professionally created post-editing dataset (15K) in the
technical domain for word-, phrase- and sentence-level prediction


*Important dates*
Release of training data - done
Release of test data - April 2016
QE metrics results submission - April 2016
Paper submission - May 8, 2016

Check the *website* for details:
http://www.statmt.org/wmt16/quality-estimation-task.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160308/84b657ea/attachment-0001.html

------------------------------

Message: 2
Date: Tue, 8 Mar 2016 11:49:59 +0100
From: Marta Ruiz <martaruizcostajussa@gmail.com>
Subject: [Moses-support] ** DEADLINE APPROACHING ** FINAL CFP:
Computer Speech and Language Special Issue on Deep Learning for
Machine Translation
To: moses-support@mit.edu
Message-ID:
<CABEBqHLNVzStppw308eiyEYSPKs-xuBhhtA6Ffei_nPJchGV-A@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

*Computer Speech and Language Special Issue on Deep Learning for Machine
Translation
<http://www.journals.elsevier.com/computer-speech-and-language/call-for-papers/special-issue-on-deep-learning-for-machine-translation/>*



Deep Learning has been successfully applied to many areas, including Natural
Language Processing, Speech Recognition and Image Processing. Deep learning
techniques have surprised the entire community, both academia and industry,
with their power to learn from data.



Recently, deep learning has been introduced to Machine Translation (MT). It
first appeared as a feature integrated into standard phrase-based or
syntax-based statistical approaches. Deep learning has been shown to be
useful in translation and language modeling as well as in reordering, tuning
and rescoring. Additionally, deep learning has been applied to MT evaluation
and quality estimation.



But the biggest impact on MT came with a new paradigm: Neural MT, which has
just recently (at the Workshop on Machine Translation 2015) outperformed
state-of-the-art systems. This new approach uses an encoder-decoder
architecture to build a neural system that is capable of translating. With
the new approach, the big open MT challenges lie in how to deal with large
vocabularies, document translation and computational power, among others.



This hot topic is attracting interest from the scientific community, and in
response there have been several related events (e.g. a tutorial [1] and a
winter school [2]). Moreover, the number of publications on this topic in
top conferences such as ACL, NAACL and EMNLP has dramatically increased in
the last three years. This would be the first special issue related to the
topic. With this special issue, we aim to offer a compilation of works that
give the reader a global vision of how deep learning techniques are applied
to MT and what new challenges the field offers.



This Special Issue expects high-quality submissions on the following topics
(but not limited to):

- Including deep learning knowledge in standard MT approaches (statistical,
rule-based, example-based...)

- Neural MT approaches

- MT hybrid techniques using deep learning

- Deep learning challenges in MT: vocabulary limitation, document
translation, computational power

- MT evaluation with deep learning techniques

- MT quality estimation with deep learning techniques

- Using deep learning in spoken language translation



*IMPORTANT DATES*

Submission deadline: 30th March 2016

Notification of rejection/re-submission: 30th July 2016

Notification of final acceptance: 30th October 2016

Expected publication date: 30th January 2017


*GUEST EDITORS*

Marta R. Costa-jussà, Universitat Politècnica de Catalunya, Spain.
marta.ruiz@upc.edu

Alexandre Allauzen, Centre National de la Recherche Scientifique, France.
allauzen@limsi.fr

Loïc Barrault, Université du Maine, France.
loic.barrault@lium.univ-lemans.fr

Kyunghyun Cho, New York University, USA. kyunghyun.cho@nyu.edu

Holger Schwenk, Facebook, USA. schwenk@fb.com




[1] http://naacl.org/naacl-hlt-2015/tutorial-deep-learning.html

[2] http://dl4mt.computing.dcu.ie/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160308/5a2f17a7/attachment-0001.html

------------------------------

Message: 3
Date: Tue, 8 Mar 2016 07:18:17 -0600
From: Lane Schwartz <dowobeha@gmail.com>
Subject: Re: [Moses-support] Scripts for n-best-list rescoring
To: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CABv3vZk493K+-Ftq3_6KAZ30tPeJoF04+E36fArKY5C_9XzrCA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I don't think there is. At my previous lab, I believe we had to build our
own in-house script. It would be nice to have one in moses.

On Sat, Oct 31, 2015 at 12:56 PM, Marcin Junczys-Dowmunt <junczys@amu.edu.pl
> wrote:

> Hi,
> does moses include scripts for n-best-list rescoring/resorting after a
> new feature has been added to the list?
>
> I guess, this can probably be achieved by running a single parameter
> tuning step on the extended n-best-list, but then I still need to fiddle
> around with calculating model scores with the new weights etc. Is there
> anything public and working with the moses n-best-list format?
>
> Cheers,
> Marcin
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>



--
When a place gets crowded enough to require ID's, social collapse is not
far away. It is time to go elsewhere. The best thing about space travel
is that it made it possible to go elsewhere.
-- R.A. Heinlein, "Time Enough For Love"
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160308/980a62f4/attachment-0001.html

------------------------------

Message: 4
Date: Tue, 8 Mar 2016 08:25:23 -0500
From: Philipp Koehn <phi@jhu.edu>
Subject: Re: [Moses-support] Scripts for n-best-list rescoring
To: Lane Schwartz <dowobeha@gmail.com>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAAFADDCw_sjAboJmfxci0V_xmv5eaHB6+=M0FHMX3ZCdwkM10Q@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi,

there is this mysterious check-in:

Commit: c6314d927d8b7b638eca387f31ccfab7facb6624

https://github.com/moses-smt/mosesdecoder/commit/c6314d927d8b7b638eca387f31ccfab7facb6624
Author: Michael Denkowski <mdenkows@amazon.com>
Date: 2016-02-23 (Tue, 23 Feb 2016)

Changed paths:
A scripts/nbest-rescore/README.md
A scripts/nbest-rescore/rescore.py
A scripts/nbest-rescore/topbest.py
A scripts/nbest-rescore/train.py

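[Editor's note: the idea discussed in this thread (append a new feature
column to each n-best entry, then re-rank with a dot product against a
weight vector) can be sketched in a few lines. This is a minimal
illustration, not the `scripts/nbest-rescore` code from the commit above;
the toy feature, feature names, and weights are invented, and it assumes the
usual `sid ||| hypothesis ||| Name= values ||| score` layout of Moses n-best
output.]

```python
def parse_feats(field):
    """Parse a Moses feature field like 'LM0= -2.0 TM0= -1.0 -0.2'
    into a flat list of floats, skipping the 'Name=' labels."""
    return [float(tok) for tok in field.split() if not tok.endswith("=")]

def rescore(nbest_lines, new_feature, weights):
    """Append new_feature(hyp) to each entry's feature vector, recompute
    the model score as a dot product with `weights`, and return the
    best-scoring hypothesis per source sentence id."""
    best = {}
    for line in nbest_lines:
        sid_s, hyp, feats_s, _old_score = [f.strip() for f in line.split("|||")]
        sid = int(sid_s)
        feats = parse_feats(feats_s) + [new_feature(hyp)]
        score = sum(w * f for w, f in zip(weights, feats))
        if sid not in best or score > best[sid][0]:
            best[sid] = (score, hyp)
    return {sid: hyp for sid, (_s, hyp) in best.items()}

# Toy usage: a word-count penalty as the "new feature"; its weight is the
# extra entry appended to the weight vector.
nbest = [
    "0 ||| a small cat ||| LM0= -2.0 TM0= -1.0 ||| -3.0",
    "0 ||| a cat ||| LM0= -2.5 TM0= -0.5 ||| -3.0",
]
top = rescore(nbest, new_feature=lambda h: -len(h.split()),
              weights=[1.0, 1.0, 0.5])
# With the penalty weighted at 0.5, the shorter hypothesis wins here.
```

Obtaining good values for the extended weight vector is the separate tuning
step Marcin mentions (e.g. one MERT/k-best-MIRA pass over the extended list);
the sketch only covers the re-ranking once weights are in hand.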
-phi

On Tue, Mar 8, 2016 at 8:18 AM, Lane Schwartz <dowobeha@gmail.com> wrote:

> I don't think there is. At my previous lab, I believe we had to build our
> own in-house script. It would be nice to have one in moses.
>
> On Sat, Oct 31, 2015 at 12:56 PM, Marcin Junczys-Dowmunt <
> junczys@amu.edu.pl> wrote:
>
>> Hi,
>> does moses include scripts for n-best-list rescoring/resorting after a
>> new feature has been added to the list?
>>
>> I guess, this can probably be achieved by running a single parameter
>> tuning step on the extended n-best-list, but then I still need to fiddle
>> around with calculating model scores with the new weights etc. Is there
>> anything public and working with the moses n-best-list format?
>>
>> Cheers,
>> Marcin
>> _______________________________________________
>> Moses-support mailing list
>> Moses-support@mit.edu
>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>
>
>
>
> --
> When a place gets crowded enough to require ID's, social collapse is not
> far away. It is time to go elsewhere. The best thing about space travel
> is that it made it possible to go elsewhere.
> -- R.A. Heinlein, "Time Enough For Love"
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160308/8683669c/attachment.html

------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 113, Issue 18
**********************************************
