Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: about Tuning in moses (nadeem khan)
2. Re: about Tuning in moses (Tom Hoar)
----------------------------------------------------------------------
Message: 1
Date: Sun, 5 Jan 2014 17:47:32 -0800 (PST)
From: nadeem khan <nad_star06@yahoo.com>
Subject: Re: [Moses-support] about Tuning in moses
To: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID:
<1388972852.49622.YahooMailNeo@web162405.mail.bf1.yahoo.com>
Content-Type: text/plain; charset="iso-8859-1"
I am not getting this definition: "Tuning refers to the process of finding the optimal weights for this linear model, where optimal weights are those which maximise translation performance on a small set of parallel sentences (the tuning set)."
As I asked before, why do we need it when we have already trained the system? Where are these optimal weights actually used while decoding? Can you please elaborate on its effect with some kind of example?
Why is there a maximum of 25 iterations in MERT? Is there any kind of science behind it? All my other questions remain the same as in my previous mail.
THANK YOU
On Monday, January 6, 2014 4:16 AM, Philipp Koehn <pkoehn@inf.ed.ac.uk> wrote:
Hi,
these are good questions that should be easy to answer if you understand the purpose of tuning when building machine translation systems.
You can find some information here:
http://www.statmt.org/moses/?n=FactoredTraining.Tuning
-phi
On Fri, Jan 3, 2014 at 2:26 PM, nadeem khan <nad_star06@yahoo.com> wrote:
Hi all
>I have a few questions about the tuning step of Moses SMT.
>1. Why do we need tuning of the system? We can decode without it, so why do we need it?
>2. What is the reason behind getting optimized weights, and where are these weights used while decoding?
>3. Why is a corpus needed for tuning, and why can't we use the training dataset or the test set for tuning the system?
>
>THANK YOU
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20140105/30d3d9af/attachment-0001.htm
------------------------------
Message: 2
Date: Mon, 06 Jan 2014 09:39:06 +0700
From: Tom Hoar <tahoar@precisiontranslationtools.com>
Subject: Re: [Moses-support] about Tuning in moses
To: moses-support@mit.edu
Message-ID: <52CA174A.8080405@precisiontranslationtools.com>
Content-Type: text/plain; charset="iso-8859-1"
Yes, you can use the moses.ini file created by the train-model.perl
script without tuning. Why do you need to tune? Simply put, it improves
the translations. If you want to see how Moses performs without tuning,
run the tuning and look at the run1.out file in the tuning
"working-dir." The original moses.ini is normally the first
configuration used in tuning. Compare that to the optimized/tuned
version, which is the final run. There should be a measurable improvement.
Why is this necessary? Because the "training of the system" (as you
referred to it) is not exactly what you seem to think.
1) The train-model.perl script "trains" a *translation model* from
your parallel corpus (phrase/reordering tables if you're using a
phrase-based system, or a hierarchical one).
2) The language model tool (of your choice) "builds" (trains) a
*language model* from your monolingual corpus.
These two steps and their outputs are separate. They know nothing of
each other. Note, you might be confused by the train-model.perl script's
'--lm' parameter. This parameter only adds the language model
information to the moses.ini file in the script's step 9. Several users
on this list, including ourselves, bypass this parameter and edit the
moses.ini file later to point to a language model we want to use.
So, the separate translation model(s) and language model(s) must be
combined to create an SMT model. The "tuning" process defines that
combination. The moses binary translates your tuning set using the
combination and a set of weights as a unified SMT model. With each
iteration, the moses binary uses a different set of weights. When the
tuning process finds the optimal weights, it stores them in a new
moses.ini file and stops. If you start with the same translation and
language models, and tune them with different tuning sets, you will get
different translation output. Therefore, the tuning set should be a
close representation of the translation work you expect your SMT model
to translate in production.
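The "combination and a set of weights" Tom describes is a linear model: the decoder scores each candidate translation as a weighted sum of its feature scores, and tuning searches for the weights. A minimal sketch of that scoring idea (the feature names and every number below are invented for illustration, not taken from an actual Moses run):

```python
# Log-scores of one candidate translation under each model component.
# Names loosely follow common Moses features; the values are made up.
features = {"lm": -12.4, "phrase_tm": -8.1, "word_penalty": -5.0, "distortion": -2.3}

def model_score(features, weights):
    """Linear model: the decoder ranks candidates by this weighted sum."""
    return sum(weights[name] * score for name, score in features.items())

untuned = {name: 1.0 for name in features}  # e.g. uniform starting weights
tuned = {"lm": 0.5, "phrase_tm": 0.3, "word_penalty": -1.0, "distortion": 0.2}

# The same candidate gets a different score under different weights, so a
# different set of weights can change which candidate the decoder outputs.
score_before = model_score(features, untuned)
score_after = model_score(features, tuned)
```

This is why tuning with a different tuning set gives different output: it lands on different weights, which re-rank the decoder's candidates.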
The maximum of 25 iterations is only a default. You can set it lower or
higher as you like. In our experience, most systems trained on
well-prepared corpora typically complete tuning automatically in fewer
than 10 runs.
The "science" behind how it stops "automatically" is described in the
documentation.
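The "stops automatically" behaviour is essentially an outer-loop convergence check: tuning repeatedly re-decodes and re-optimizes until the weights stop moving, with the iteration maximum as a safety cap. A toy sketch of that control flow only (the `update` callback here merely stands in for the real decode-and-optimize step; it is not what MERT actually computes):

```python
def tune(initial_weights, update, max_iterations=25, tol=1e-4):
    """Toy outer loop in the spirit of MERT: repeatedly re-optimize the
    weights and stop early once they change by less than `tol`."""
    weights = initial_weights
    for iteration in range(1, max_iterations + 1):
        new_weights = update(weights)  # stands in for decode + optimize
        if max(abs(n - w) for n, w in zip(new_weights, weights)) < tol:
            return new_weights, iteration  # converged before the cap
        weights = new_weights
    return weights, max_iterations  # hit the default 25-iteration cap

# A stand-in update that moves each weight halfway toward 1.0 converges
# well before the 25-iteration cap, mimicking a typical well-behaved run.
final, iters = tune([0.0, 0.0], lambda ws: [(w + 1.0) / 2 for w in ws])
```

The cap exists because the loop is not guaranteed to converge quickly; raising or lowering it just trades tuning time against how thoroughly the weights are refined.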
Tom
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 87, Issue 12
*********************************************