Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. updater in mosesserver (Lane Schwartz)
2. Ensemble of Neural Machine Translation systems (Nat Gillin)
3. EMS: filter-model-given-input.pl and threads (Tomasz Gawryl)
4. Re: Ensemble of Neural Machine Translation systems (Rico Sennrich)
----------------------------------------------------------------------
Message: 1
Date: Wed, 2 Nov 2016 16:28:05 -0500
From: Lane Schwartz <dowobeha@gmail.com>
Subject: [Moses-support] updater in mosesserver
To: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID:
<CABv3vZkw1=i7uSR=VjLCQOWM6BbU6ueXWpSy3Z+rxczo7X0mOA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Hi,
I'm interested in adding some features that change for each sentence, and
in using the XML-RPC mechanism to communicate those per-sentence changes
to mosesserver.
I've been looking through the code to see how much infrastructure already
exists for something like this. I found TranslationTask and ContextScope,
which Hieu tells me may be in use by Uli in the context of domain
adaptation? If Uli or others could chime in regarding their use, that would
be very helpful.
I also found some code relating to something called an "updater" in
moses/server/Server.cpp. Is that code used or described anywhere?
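For reference, a per-sentence request over mosesserver's XML-RPC interface might be sketched as below. The "text" parameter is the standard one; the "context-weights" key is a purely hypothetical placeholder for whatever per-sentence data the updater/ContextScope machinery might accept, not an actual mosesserver parameter:

```python
import xmlrpc.client

def build_params(sentence, extra=None):
    # "text" is the standard mosesserver translate() parameter;
    # anything in `extra` is hypothetical per-sentence context.
    params = {"text": sentence}
    if extra:
        params.update(extra)
    return params

params = build_params("das ist ein haus .",
                      {"context-weights": "domain1:0.7,domain2:0.3"})

# Against a running server (commented out here):
# proxy = xmlrpc.client.ServerProxy("http://localhost:8080/RPC2")
# result = proxy.translate(params)  # result["text"] holds the translation
print(params)
```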
Thanks,
Lane
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20161102/3b97b6ce/attachment-0001.html
------------------------------
Message: 2
Date: Thu, 3 Nov 2016 10:05:06 +0800
From: Nat Gillin <nat.gillin@gmail.com>
Subject: [Moses-support] Ensemble of Neural Machine Translation
systems
To: moses-support@mit.edu
Message-ID:
<CAD2EOZjr2GxmpBcZ3SQkiCb6nsVit7SKSLc0fEuLzyreHOTFJg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Dear Moses Community,
In recent papers, many BLEU scores have been reported for ensembles of
neural machine translation systems. I would like to ask: does anyone
know how these ensembles are created?
Is it some sort of averaged pooling layer at the end? Is it some sort of
voting among multiple systems at every decoding time step?
Any pointers to papers describing this magical ensemble would be great =)
Most papers just say: we ensemble, we beat Moses. Are there cases
where a single model beats Moses in a normal translation task without
ensembling?
Regards,
Nat
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20161102/1cd9a812/attachment-0001.html
------------------------------
Message: 3
Date: Thu, 3 Nov 2016 14:48:07 +0100
From: "Tomasz Gawryl" <tomasz.gawryl@skrivanek.pl>
Subject: [Moses-support] EMS: filter-model-given-input.pl and threads
To: <moses-support@mit.edu>
Message-ID: <00ec01d235d8$e8a98ec0$b9fcac40$@skrivanek.pl>
Content-Type: text/plain; charset="us-ascii"
Hi All,
I'm trying to create a compact phrase table during BilingualLM training. It
works fine, but at one point the process slows down.
I set 18 threads for the TRAINING:binarize-config step:
ttable-binarizer = "$moses-bin-dir/processPhraseTableMin -threads 18"
But one of the scripts (filter-model-given-input.pl) appends its own default
setting "-threads 1", which overrides the 18 threads:
moses     8081  0.0  0.0   24840    6976 pts/5 SN  11:13  0:00 perl /home/moses/src/mosesdecoder/scripts/training/binarize-model.perl /home/moses/working/experiments/EN-PL/BilingualLM/model/moses.ini.1 /home/moses/working/experiments/EN-PL/BilingualLM/model/moses.bin.ini.2 -Binarizer /home/moses/src/mosesdecoder/bin/processPhraseTableMin -threads 18
moses     8083  0.0  0.0   20984    7120 pts/5 SN  11:13  0:00 perl /home/moses/src/mosesdecoder/scripts/training/filter-model-given-input.pl /home/moses/working/experiments/EN-PL/BilingualLM/model/moses.bin.ini.2.tables /home/moses/working/experiments/EN-PL/BilingualLM/model/moses.ini.1 /dev/null -nofilter -Binarizer /home/moses/src/mosesdecoder/bin/processPhraseTableMin -threads 18
moses    31792  126  2.6 2357912 1996868 pts/5 SNl 13:48 46:44 /home/moses/src/mosesdecoder/bin/processPhraseTableMin -threads 18 -in /home/moses/working/experiments/EN-PL/BilingualLM/model/moses.bin.ini.2.tables/phrase-table.0-0.1.1.gz.sorted.gz -out /home/moses/working/experiments/EN-PL/BilingualLM/model/moses.bin.ini.2.tables/phrase-table.0-0.1.1 -nscores 4 -threads 1
My question is: is this the intended behaviour, or have I made an error in
my configuration?
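The slowdown would be consistent with last-occurrence-wins flag parsing: in the third process above, the appended "-threads 1" comes after the user's "-threads 18" on the command line, so the final value wins. A toy sketch of that parsing behaviour (an assumption about how the binarizer reads repeated flags, not its actual code):

```python
def parse_threads(argv, default=1):
    # Scan left to right; each -threads occurrence overwrites the
    # previous value, so the last one on the command line wins.
    threads = default
    it = iter(argv)
    for arg in it:
        if arg == "-threads":
            threads = int(next(it))
    return threads

argv = ["-threads", "18", "-in", "pt.gz", "-out", "pt", "-threads", "1"]
print(parse_threads(argv))  # -> 1
```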
Regards,
Thomas
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20161103/7ecb6b4c/attachment-0001.html
------------------------------
Message: 4
Date: Thu, 3 Nov 2016 14:50:43 +0000
From: Rico Sennrich <rico.sennrich@gmx.ch>
Subject: Re: [Moses-support] Ensemble of Neural Machine Translation
systems
To: moses-support@mit.edu
Message-ID: <1ad18782-2353-e10a-d62b-9ce45d95e30a@gmx.ch>
Content-Type: text/plain; charset="windows-1252"
Hello Nat,
for NMT ensembles, you just average the probability distributions of the
different models at each time step before selecting the next hypothesis
(or hypotheses, in beam search). If you're familiar with Moses, this is
similar to what happens when we combine different feature functions in
the global log-linear model. That's also why I don't think the
comparison of a neural network ensemble to Moses is unfair in principle:
both combine various models to obtain the final translation
probabilities, and the Moses phrase table alone has (at least) four.
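The averaging step Rico describes can be sketched as below. The 4-word vocabulary and the two model distributions are made-up toy values; real NMT decoders would produce these vectors from their decoder states at each time step:

```python
import numpy as np

def ensemble_step(distributions):
    # Arithmetic mean of the per-model next-token distributions,
    # renormalised to guard against rounding drift.
    avg = np.mean(np.stack(distributions), axis=0)
    return avg / avg.sum()

p1 = np.array([0.7, 0.1, 0.1, 0.1])  # model 1's next-token distribution
p2 = np.array([0.5, 0.3, 0.1, 0.1])  # model 2's next-token distribution
avg = ensemble_step([p1, p2])
print(avg)          # -> [0.6 0.2 0.1 0.1]
print(avg.argmax()) # greedy pick: token 0
```

Some systems average log-probabilities instead (a geometric mean of the distributions), which is even closer to the log-linear combination Rico mentions.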
Our official submissions to WMT16 are ensembles, but even our single
systems outperform non-neural submissions for EN->DE, EN->RO, EN->CS and
DE->EN (in terms of BLEU).
best wishes,
Rico
On 03/11/16 02:05, Nat Gillin wrote:
> Dear Moses Community,
>
> In recent papers, many BLEU scores have been reported for ensembles
> of neural machine translation systems. I would like to ask: does
> anyone know how these ensembles are created?
>
> Is it some sort of averaged pooling layer at the end? Is it some sort
> of voting among multiple systems at every decoding time step?
>
> Any pointers to papers describing this magical ensemble would be great =)
>
> Most papers just say: we ensemble, we beat Moses. Are there cases
> where a single model beats Moses in a normal translation task without
> ensembling?
>
> Regards,
> Nat
>
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20161103/507f0199/attachment.html
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 121, Issue 19
**********************************************