Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: Fwd: Re: Major bug found in Moses (Read, James C)
2. Re: Major bug found in Moses (Marcin Junczys-Dowmunt)
3. Re: Major bug found in Moses (Read, James C)
----------------------------------------------------------------------
Message: 1
Date: Wed, 17 Jun 2015 14:10:46 +0000
From: "Read, James C" <jcread@essex.ac.uk>
Subject: Re: [Moses-support] Fwd: Re: Major bug found in Moses
To: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>, Moses Support
<moses-support@mit.edu>
Cc: "Arnold, Doug" <doug@essex.ac.uk>
Message-ID:
<DB3PR06MB0713FB21B0C12EEE0F9A9C8C85A60@DB3PR06MB0713.eurprd06.prod.outlook.com>
Content-Type: text/plain; charset="iso-8859-1"
So can we agree that this is undesirable behaviour and therefore a bug?
James
________________________________
From: moses-support-bounces@mit.edu <moses-support-bounces@mit.edu> on behalf of Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Sent: Wednesday, June 17, 2015 5:04 PM
To: Moses Support
Subject: [Moses-support] Fwd: Re: Major bug found in Moses
As I said: with an unpruned phrase table and a decoder that just optimizes some unreasonable set of weights, all bets are off, so it is not surprising if you get a very low BLEU score there. The decoder is probably jumping around in a very weird search space. With a pruned phrase table you restrict the search space VERY strongly; nearly everything that can be produced is a half-decent translation. So yes, I can imagine that would happen.
Marcin
On 2015-06-17 15:56, Read, James C wrote:
You would expect an improvement of 37 BLEU points?
James
________________________________
From: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Sent: Wednesday, June 17, 2015 4:32 PM
To: Read, James C
Cc: Moses-support@mit.edu; Arnold, Doug
Subject: Re: [Moses-support] Major bug found in Moses
Hi James,
there are many more factors involved than just probability, for instance word penalties, phrase penalties, etc. To validate your claim you would need to set the weights for all those non-probability features to zero. Otherwise there is no hope that Moses will produce anything similar to the most probable translation, and so it is no surprise that the translations differ. A pruned phrase table naturally produces less noise, so I would say the behaviour you describe is exactly what I would expect to happen.
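A toy log-linear scoring sketch (illustrative feature values only, not taken from any real phrase table) shows the point: the decoder maximizes a weighted sum of features, so the best-scoring hypothesis need not be the TM's most probable one unless the non-probability weights are zero.

```python
# Two hypothetical candidate translations, each with
# (TM log-probability, word-penalty feature, phrase-penalty feature).
# The numbers are made up for illustration.
candidates = {
    "short": (-4.0, -3.0, -1.0),  # less probable under the TM, but short
    "long":  (-2.0, -8.0, -4.0),  # most probable under the TM, but long
}

def score(feats, weights):
    """Log-linear model score: weighted sum of feature values."""
    return sum(w * f for w, f in zip(weights, feats))

# With non-zero penalty weights, the decoder prefers the LESS probable candidate.
mixed_weights = (1.0, 0.5, 0.5)
best_mixed = max(candidates, key=lambda c: score(candidates[c], mixed_weights))

# With the penalty weights zeroed, the ranking follows TM probability alone.
tm_only_weights = (1.0, 0.0, 0.0)
best_tm = max(candidates, key=lambda c: score(candidates[c], tm_only_weights))
```

Under the mixed weights the "short" candidate wins despite its lower TM probability; zeroing the penalty weights recovers the TM's own ranking, which is the setup Marcin says is required before comparing outputs against "most probable according to the TM".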
Best,
Marcin
On 2015-06-17 15:26, Read, James C wrote:
Hi all,
I tried unsuccessfully to publish experiments showing this bug in Moses behaviour. As a result I have lost interest in attempting to have my work published. Nonetheless I think you all should be aware of an anomaly in Moses' behaviour which I have thoroughly exposed and should be easy enough for you to reproduce.
As I understand it the TM logic of Moses should select the most likely translations according to the TM. I would therefore expect a run of Moses with no LM to find sentences which are the most likely or at least close to the most likely according to the TM.
To test this behaviour I performed two runs of Moses: one with an unfiltered phrase table, the other with a filtered phrase table that kept only the most likely phrase pair for each source-language phrase. The results were truly startling: I observed huge differences in BLEU score, with the filtered phrase table producing much higher scores. The beam size used was the default width of 100. I would not have been surprised if the differences in BLEU score were minimal, but they were quite high.
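The filtering step described above can be sketched roughly as follows, assuming the standard Moses phrase-table line format `src ||| tgt ||| scores`. Which score column holds the direct translation probability depends on how the table was built, so the column index used here is an assumption:

```python
def filter_phrase_table(lines, prob_column=2):
    """Keep only the single most likely target phrase per source phrase.

    `prob_column` selects which of the whitespace-separated scores is
    treated as the translation probability (an assumption; the usual
    Moses table has four score columns).
    """
    best = {}
    for line in lines:
        fields = [f.strip() for f in line.split("|||")]
        src, tgt = fields[0], fields[1]
        prob = float(fields[2].split()[prob_column])
        if src not in best or prob > best[src][1]:
            best[src] = (line, prob)
    return [entry[0] for entry in best.values()]

# Tiny made-up example table (not real data):
table = [
    "the house ||| das Haus ||| 0.1 0.2 0.7 0.3",
    "the house ||| das Gebaeude ||| 0.1 0.2 0.2 0.3",
    "a ||| ein ||| 0.5 0.5 0.9 0.5",
]
filtered = filter_phrase_table(table)
```

After filtering, each source phrase maps to exactly one target phrase, which is what removes the "less likely phrase pairs" from the decoder's search space.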
I have been unable to find a logical explanation for this behaviour other than to conclude that there must be some kind of bug in Moses which causes a TM-only run to perform poorly at finding the most likely translations according to the TM when less likely phrase pairs are included in the running.
I hope this information will be useful to the Moses community and that the cause of the behaviour can be found and rectified.
James
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu<mailto:Moses-support@mit.edu>
http://mailman.mit.edu/mailman/listinfo/moses-support
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20150617/2c932377/attachment-0001.htm
------------------------------
Message: 2
Date: Wed, 17 Jun 2015 16:12:47 +0200
From: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Subject: Re: [Moses-support] Major bug found in Moses
To: "Read, James C" <jcread@essex.ac.uk>
Cc: moses-support@mit.edu, "Arnold, Doug" <doug@essex.ac.uk>
Message-ID: <b9a2dcbabe0363b8439bcb309ffda9be@amu.edu.pl>
Content-Type: text/plain; charset="utf-8"
Hi James
No, not at all. I would say that is expected behaviour; it's how search
spaces and optimization work. If anything these are methodological
mistakes on your side, sorry. You are doing weird things to the decoder
and then you are surprised to get weird results from it.
On 2015-06-17 16:07, Read, James C wrote:
> So, do we agree that this is undesirable behaviour and therefore a bug?
>
> James
>
> -------------------------
>
> FROM: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
> SENT: Wednesday, June 17, 2015 5:01 PM
> TO: Read, James C
> SUBJECT: Re: [Moses-support] Major bug found in Moses
>
> As I said: with an unpruned phrase table and a decoder that just optimizes some unreasonable set of weights, all bets are off, so it is not surprising if you get a very low BLEU score there. The decoder is probably jumping around in a very weird search space. With a pruned phrase table you restrict the search space VERY strongly; nearly everything that can be produced is a half-decent translation. So yes, I can imagine that would happen.
>
> Marcin
>
> On 2015-06-17 15:56, Read, James C wrote:
>
> You would expect an improvement of 37 BLEU points?
>
> James
>
> -------------------------
>
> FROM: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
> SENT: Wednesday, June 17, 2015 4:32 PM
> TO: Read, James C
> CC: Moses-support@mit.edu; Arnold, Doug
> SUBJECT: Re: [Moses-support] Major bug found in Moses
>
> Hi James,
>
> there are many more factors involved than just probability, for instance word penalties, phrase penalties, etc. To validate your claim you would need to set the weights for all those non-probability features to zero. Otherwise there is no hope that Moses will produce anything similar to the most probable translation, and so it is no surprise that the translations differ. A pruned phrase table naturally produces less noise, so I would say the behaviour you describe is exactly what I would expect to happen.
>
> Best,
>
> Marcin
>
> On 2015-06-17 15:26, Read, James C wrote:
>
> Hi all,
>
> I tried unsuccessfully to publish experiments showing this bug in Moses behaviour. As a result I have lost interest in attempting to have my work published. Nonetheless I think you all should be aware of an anomaly in Moses' behaviour which I have thoroughly exposed and should be easy enough for you to reproduce.
>
> As I understand it the TM logic of Moses should select the most likely translations according to the TM. I would therefore expect a run of Moses with no LM to find sentences which are the most likely or at least close to the most likely according to the TM.
>
> To test this behaviour I performed two runs of Moses: one with an unfiltered phrase table, the other with a filtered phrase table that kept only the most likely phrase pair for each source-language phrase. The results were truly startling: I observed huge differences in BLEU score, with the filtered phrase table producing much higher scores. The beam size used was the default width of 100. I would not have been surprised if the differences in BLEU score were minimal, but they were quite high.
>
> I have been unable to find a logical explanation for this behaviour other than to conclude that there must be some kind of bug in Moses which causes a TM-only run to perform poorly at finding the most likely translations according to the TM when less likely phrase pairs are included in the running.
>
> I hope this information will be useful to the Moses community and that the cause of the behaviour can be found and rectified.
>
> James
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support [1]
Links:
------
[1] http://mailman.mit.edu/mailman/listinfo/moses-support
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20150617/19b46dc0/attachment-0001.htm
------------------------------
Message: 3
Date: Wed, 17 Jun 2015 14:22:11 +0000
From: "Read, James C" <jcread@essex.ac.uk>
Subject: Re: [Moses-support] Major bug found in Moses
To: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Cc: "moses-support@mit.edu" <moses-support@mit.edu>, "Arnold, Doug"
<doug@essex.ac.uk>
Message-ID:
<DB3PR06MB071389B8158A69EED9609E5E85A60@DB3PR06MB0713.eurprd06.prod.outlook.com>
Content-Type: text/plain; charset="iso-8859-2"
All I did was break the link to the language model and then perform filtering. How is that a methodological mistake? How else would one test the efficacy of the TM in isolation?
I remain convinced that this is undesirable behaviour and therefore a bug.
James
________________________________
From: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Sent: Wednesday, June 17, 2015 5:12 PM
To: Read, James C
Cc: Arnold, Doug; moses-support@mit.edu
Subject: Re: [Moses-support] Major bug found in Moses
Hi James
No, not at all. I would say that is expected behaviour; it's how search spaces and optimization work. If anything these are methodological mistakes on your side, sorry. You are doing weird things to the decoder and then you are surprised to get weird results from it.
On 2015-06-17 16:07, Read, James C wrote:
So, do we agree that this is undesirable behaviour and therefore a bug?
James
________________________________
From: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Sent: Wednesday, June 17, 2015 5:01 PM
To: Read, James C
Subject: Re: [Moses-support] Major bug found in Moses
As I said: with an unpruned phrase table and a decoder that just optimizes some unreasonable set of weights, all bets are off, so it is not surprising if you get a very low BLEU score there. The decoder is probably jumping around in a very weird search space. With a pruned phrase table you restrict the search space VERY strongly; nearly everything that can be produced is a half-decent translation. So yes, I can imagine that would happen.
Marcin
On 2015-06-17 15:56, Read, James C wrote:
You would expect an improvement of 37 BLEU points?
James
________________________________
From: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Sent: Wednesday, June 17, 2015 4:32 PM
To: Read, James C
Cc: Moses-support@mit.edu; Arnold, Doug
Subject: Re: [Moses-support] Major bug found in Moses
Hi James,
there are many more factors involved than just probability, for instance word penalties, phrase penalties, etc. To validate your claim you would need to set the weights for all those non-probability features to zero. Otherwise there is no hope that Moses will produce anything similar to the most probable translation, and so it is no surprise that the translations differ. A pruned phrase table naturally produces less noise, so I would say the behaviour you describe is exactly what I would expect to happen.
Best,
Marcin
On 2015-06-17 15:26, Read, James C wrote:
Hi all,
I tried unsuccessfully to publish experiments showing this bug in Moses behaviour. As a result I have lost interest in attempting to have my work published. Nonetheless I think you all should be aware of an anomaly in Moses' behaviour which I have thoroughly exposed and should be easy enough for you to reproduce.
As I understand it the TM logic of Moses should select the most likely translations according to the TM. I would therefore expect a run of Moses with no LM to find sentences which are the most likely or at least close to the most likely according to the TM.
To test this behaviour I performed two runs of Moses: one with an unfiltered phrase table, the other with a filtered phrase table that kept only the most likely phrase pair for each source-language phrase. The results were truly startling: I observed huge differences in BLEU score, with the filtered phrase table producing much higher scores. The beam size used was the default width of 100. I would not have been surprised if the differences in BLEU score were minimal, but they were quite high.
I have been unable to find a logical explanation for this behaviour other than to conclude that there must be some kind of bug in Moses which causes a TM-only run to perform poorly at finding the most likely translations according to the TM when less likely phrase pairs are included in the running.
I hope this information will be useful to the Moses community and that the cause of the behaviour can be found and rectified.
James
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu<mailto:Moses-support@mit.edu>
http://mailman.mit.edu/mailman/listinfo/moses-support
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20150617/83766cb1/attachment.htm
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 104, Issue 29
**********************************************