Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: Some addition suggestions to English non-breaking
prefixes (Hieu Hoang)
2. NPLM and BilingualNPLM not working as expected in Moses
(Raj Dabre)
3. Re: NPLM and BilingualNPLM not working as expected in Moses
(Rico Sennrich)
4. Re: NPLM and BilingualNPLM not working as expected in Moses
(Raj Dabre)
----------------------------------------------------------------------
Message: 1
Date: Mon, 6 Jul 2015 14:02:24 +0400
From: Hieu Hoang <hieuhoang@gmail.com>
Subject: Re: [Moses-support] Some addition suggestions to English
non-breaking prefixes
To: Ozan Çağlayan <ozancag@gmail.com>, moses-support@mit.edu
Message-ID: <559A5230.7050701@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Thanks Ozan,
Sometimes these abbreviations can also appear at the end of a sentence,
so I'm not sure what effect adding them would have.
I'll leave it to people to discuss before adding them.
On 02/07/2015 23:42, Ozan Çağlayan wrote:
> Jr // Junior
> JR
> etc
> Etc
> Inc // Incorporated
> ed // edition
--
Hieu Hoang
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu
------------------------------
Message: 2
Date: Mon, 6 Jul 2015 22:29:06 +0900
From: Raj Dabre <prajdabre@gmail.com>
Subject: [Moses-support] NPLM and BilingualNPLM not working as
expected in Moses
To: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID:
<CAB3gfjBRWEy447j=UOefQUbNZEn46JSG7MLAxhQi4K9sB+oy+Q@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Dear all,
I have checked out the latest versions of Moses and NPLM and compiled Moses
successfully with the --with-nplm option.
I got a ton of warnings during compilation, but in the end it all worked out
and all the desired binaries were created. Simply executing the moses
binary told me that the BilingualNPLM and NeuralLM features were available.
I trained an NPLM model based on the instructions here:
http://www.statmt.org/moses/?n=FactoredTraining.BuildingLanguageModel#ntoc33
The corpus I used was about 600k lines (Chinese-Japanese; the target
side is Japanese).
I then integrated the resulting language model (after 10 iterations) into
the decoding process via moses.ini.
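For reference, the moses.ini entries for these features look roughly like this (a sketch based on the Moses documentation linked above; all paths and the name= labels here are placeholders, so check the option names against your Moses version):

```ini
[feature]
NeuralLM name=NLM0 factor=0 order=5 path=/path/to/nplm.model
BilingualNPLM name=BNPLM0 order=5 source_window=4 path=/path/to/bilingual.model source_vocab=/path/to/vocab.source target_vocab=/path/to/vocab.target

[weight]
NLM0= 0.5
BNPLM0= 0.5
```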
I initiated tuning (standard parameters) and I got no errors, which means
that the neural language model (NPLM) was recognized and queried
appropriately.
I also ran tuning without a language model.
The strange thing is that the tuning and test BLEU scores for both these
cases are almost the same. I checked the weights and saw that the LM was
assigned a very low weight.
On the other hand, when I used KenLM on the same data, I got
comparatively higher BLEU scores.
Am I missing something? Am I using the NeuralLM in an incorrect way?
Thanks in advance.
--
Raj Dabre.
Doctoral Student,
Graduate School of Informatics,
Kyoto University.
CSE MTech, IITB., 2011-2014
------------------------------
Message: 3
Date: Mon, 06 Jul 2015 14:53:56 +0100
From: Rico Sennrich <rico.sennrich@gmx.ch>
Subject: Re: [Moses-support] NPLM and BilingualNPLM not working as
expected in Moses
To: moses-support@mit.edu
Message-ID: <559A8874.50602@gmx.ch>
Content-Type: text/plain; charset="windows-1252"
Hello Raj,
can you please clarify if you tried to train a monolingual LM
(NeuralLM), a bilingual LM (BilingualNPLM), or both? Our previous
experiences with BilingualNPLM are mixed, and we observed improvements
for some tasks and language pairs, but not for others. See for instance:
Alexandra Birch, Matthias Huck, Nadir Durrani, Nikolay Bogoychev and
Philipp Koehn. 2014. Edinburgh SLT and MT System Description for the
IWSLT 2014 Evaluation. Proceedings of IWSLT 2014.
To help debugging, you can check the scores in the n-best lists of the
tuning runs. If the NPLM features give much higher costs than KenLM
(trained on the same data), this can indicate that something went wrong
during training.
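As a rough sketch of such a check (assuming the standard Moses n-best format 'id ||| hypothesis ||| features ||| total' and a named feature such as LM0; the feature name and function name here are illustrative, not fixed by Moses):

```python
import re
from statistics import mean

def mean_feature_score(nbest_lines, feature_name):
    """Average the value of one named feature (e.g. 'LM0') over a
    Moses n-best list, whose lines look like:
    'sent_id ||| hypothesis ||| LM0= -104.7 TM0= -1.2 ... ||| total'."""
    scores = []
    for line in nbest_lines:
        fields = [f.strip() for f in line.split("|||")]
        if len(fields) < 4:
            continue  # skip malformed lines
        # Pull the first number following 'feature_name=' in the feature field
        match = re.search(re.escape(feature_name) + r"=\s*(-?\d+(?:\.\d+)?)",
                          fields[2])
        if match:
            scores.append(float(match.group(1)))
    return mean(scores) if scores else None
```

Comparing this average for the NPLM feature against the KenLM feature (trained on the same data) across the two tuning runs makes the "much higher costs" symptom easy to spot.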
best wishes,
Rico
On 06.07.2015 14:29, Raj Dabre wrote:
> Dear all,
> I have checked out the latest versions of Moses and NPLM and compiled
> Moses successfully with the --with-nplm option.
> I got a ton of warnings during compilation, but in the end it all
> worked out and all the desired binaries were created. Simply executing
> the moses binary told me that the BilingualNPLM and NeuralLM features
> were available.
>
> I trained an NPLM model based on the instructions here:
> http://www.statmt.org/moses/?n=FactoredTraining.BuildingLanguageModel#ntoc33
> The corpus size I used was about 600k lines (for Chinese-Japanese;
> Target is Japanese)
>
> I then integrated the resultant language model (after 10 iterations)
> into the decoding process by moses.ini
>
> I initiated tuning (standard parameters) and I got no errors, which
> means that the neural language model (NPLM) was recognized and queried
> appropriately.
> I also ran tuning without a language model.
>
> The strange thing is that the tuning and test BLEU scores for both
> these cases are almost the same. I checked the weights and saw that
> the LM was assigned a very low weight.
>
> On the other hand, when I used KenLM on the same data, I got
> comparatively higher BLEU scores.
>
> Am I missing something? Am I using the NeuralLM in an incorrect way?
>
> Thanks in advance.
>
>
>
> --
> Raj Dabre.
> Doctoral Student,
> Graduate School of Informatics,
> Kyoto University.
> CSE MTech, IITB., 2011-2014
------------------------------
Message: 4
Date: Tue, 7 Jul 2015 00:31:52 +0900
From: Raj Dabre <prajdabre@gmail.com>
Subject: Re: [Moses-support] NPLM and BilingualNPLM not working as
expected in Moses
To: Rico Sennrich <rico.sennrich@gmx.ch>
Cc: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID:
<CAB3gfjBXxLc2Eh=AN+VEgfxUdam33Aj8CpEzMgvVYKUNasA4zA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Hello Rico,
I trained both a monolingual and a bilingual LM.
Both seemed ineffective.
As I mentioned before, I am working with Chinese-Japanese, and the domain is
paper abstracts.
I did check the n-best lists, and I saw a significant difference between the
LM scores when comparing the runs for KenLM and NPLM.
What could have gone wrong during the training?
Regards.
On Mon, Jul 6, 2015 at 10:53 PM, Rico Sennrich <rico.sennrich@gmx.ch> wrote:
> Hello Raj,
>
> can you please clarify if you tried to train a monolingual LM (NeuralLM),
> a bilingual LM (BilingualNPLM), or both? Our previous experiences with
> BilingualNPLM are mixed, and we observed improvements for some tasks and
> language pairs, but not for others. See for instance:
>
> Alexandra Birch, Matthias Huck, Nadir Durrani, Nikolay Bogoychev and
> Philipp Koehn. 2014. Edinburgh SLT and MT System Description for the IWSLT
> 2014 Evaluation. Proceedings of IWSLT 2014.
>
> To help debugging, you can check the scores in the n-best lists of the
> tuning runs. If the NPLM features give much higher costs than KenLM
> (trained on the same data), this can indicate that something went wrong
> during training.
>
> best wishes,
> Rico
>
> On 06.07.2015 14:29, Raj Dabre wrote:
>
> Dear all,
> I have checked out the latest versions of Moses and NPLM and compiled
> Moses successfully with the --with-nplm option.
> I got a ton of warnings during compilation, but in the end it all worked
> out and all the desired binaries were created. Simply executing the moses
> binary told me that the BilingualNPLM and NeuralLM features were available.
>
> I trained an NPLM model based on the instructions here:
> http://www.statmt.org/moses/?n=FactoredTraining.BuildingLanguageModel#ntoc33
> The corpus size I used was about 600k lines (for Chinese-Japanese; Target
> is Japanese)
>
> I then integrated the resultant language model (after 10 iterations) into
> the decoding process by moses.ini
>
> I initiated tuning (standard parameters) and I got no errors, which means
> that the neural language model (NPLM) was recognized and queried
> appropriately.
> I also ran tuning without a language model.
>
> The strange thing is that the tuning and test BLEU scores for both these
> cases are almost the same. I checked the weights and saw that the LM was
> assigned a very low weight.
>
> On the other hand, when I used KenLM on the same data, I got
> comparatively higher BLEU scores.
>
> Am I missing something? Am I using the NeuralLM in an incorrect way?
>
> Thanks in advance.
>
>
>
> --
> Raj Dabre.
> Doctoral Student,
> Graduate School of Informatics,
> Kyoto University.
> CSE MTech, IITB., 2011-2014
--
Raj Dabre.
Doctoral Student,
Graduate School of Informatics,
Kyoto University.
CSE MTech, IITB., 2011-2014
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 105, Issue 12
**********************************************