Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: Major bug found in Moses (Kenneth Heafield)
----------------------------------------------------------------------
Message: 1
Date: Wed, 17 Jun 2015 12:13:46 -0400
From: Kenneth Heafield <moses@kheafield.com>
Subject: Re: [Moses-support] Major bug found in Moses
To: moses-support@mit.edu
Message-ID: <55819CBA.6090705@kheafield.com>
Content-Type: text/plain; charset=windows-1252
I'll bite.
The moses.ini files ship with bogus feature weights. You are expected to
tune the system to discover good weights for your data. You did not
tune. The results of an untuned system are meaningless.
For example, if the feature weights are all zero, then every hypothesis
scores zero, and the system will arbitrarily pick some awful translation
from a large space of candidates.
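To make that concrete, here is a toy sketch (plain Python with invented feature values, not Moses internals) of a log-linear model score and why all-zero weights make the decoder's choice arbitrary:

```python
# Toy sketch (NOT Moses code): a decoder picks the hypothesis that
# maximizes a log-linear score, sum_i w_i * f_i(hypothesis).
def score(features, weights):
    """Log-linear model score: dot product of feature values and weights."""
    return sum(w * f for w, f in zip(weights, features))

# Hypothetical feature values (log probabilities) for two candidates.
hypotheses = {
    "good translation": [-0.5, -0.7, -1.2],
    "awful translation": [-9.0, -8.5, -7.0],
}

# With all-zero weights, every hypothesis scores 0.0, so the model
# expresses no preference: any candidate can win the tie.
zero_weights = [0.0, 0.0, 0.0]
scores = {h: score(f, zero_weights) for h, f in hypotheses.items()}
assert scores["good translation"] == scores["awful translation"] == 0.0
```

This is the sense in which untuned (or zero) weights make the output meaningless: the features still exist, but nothing tells the decoder how to weigh them.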
The filter looks at one feature p(target | source). So now you've
constrained the awful untuned model to a slightly better region of the
search space.
In other words, all you've done is a poor approximation to manually
setting the weight to 1.0 on p(target | source) and the rest to 0.
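The equivalence can be sketched the same way (again a toy illustration with made-up phrase pairs and probabilities, not the actual Moses filter): thresholding the phrase table on p(target | source) roughly mimics scoring with weight 1.0 on that one feature and 0.0 on everything else.

```python
# Toy sketch (NOT Moses code): filtering on p(target|source) vs.
# scoring with weight 1.0 on that single feature.
import math

# Hypothetical phrase-table entries: (source, target) -> features.
phrase_table = {
    ("chat", "cat"): {"p_t_given_s": 0.8, "p_s_given_t": 0.6},
    ("chat", "chat room"): {"p_t_given_s": 0.1, "p_s_given_t": 0.2},
}

# Filtering: drop entries below a threshold on p(target|source).
filtered = {k: v for k, v in phrase_table.items()
            if v["p_t_given_s"] >= 0.5}

# Single-feature scoring: weight 1.0 on log p(target|source),
# weight 0.0 on every other feature.
def single_feature_score(entry):
    return 1.0 * math.log(entry["p_t_given_s"])

best = max(phrase_table.items(), key=lambda kv: single_feature_score(kv[1]))

# Both approaches favor the same pair; neither uses the rest
# of the model, which is the point being made above.
assert best[0] == ("chat", "cat")
assert list(filtered) == [("chat", "cat")]
```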
The problem isn't that you are running without a language model (though
we generally do not care what happens without one). The problem is that
you did not tune the feature weights.
Moreover, as Marcin is pointing out, I wouldn't necessarily expect
tuning to work without an LM.
On 06/17/15 11:56, Read, James C wrote:
> Actually, the approximation I expect is:
>
> p(e|f) = p(f|e)
>
> Why would you expect this to give poor results if the TM is well trained? Surely the results of my filtering experiments prove otherwise.
>
> James
>
> ________________________________________
> From: moses-support-bounces@mit.edu <moses-support-bounces@mit.edu> on behalf of Rico Sennrich <rico.sennrich@gmx.ch>
> Sent: Wednesday, June 17, 2015 5:32 PM
> To: moses-support@mit.edu
> Subject: Re: [Moses-support] Major bug found in Moses
>
> Read, James C <jcread@...> writes:
>
>> I have been unable to find a logical explanation for this behaviour other
>> than to conclude that there must be some kind of bug in Moses which causes
>> a TM-only run of Moses to perform poorly in finding the most likely
>> translations according to the TM when there are less likely phrase pairs
>> included in the race.
> I may have overlooked something, but you seem to have removed the language
> model from your config, and used default weights. Your default model will
> thus (roughly) implement the following model:
>
> p(e|f) = p(e|f)*p(f|e)
>
> which is obviously wrong, and will give you poor results. This is not a bug
> in the code, but a poor choice of models and weights. Standard steps in SMT
> (like tuning the model weights on a development set, and including a
> language model) will give you the desired results.
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
------------------------------
End of Moses-support Digest, Vol 104, Issue 33
**********************************************