Moses-support Digest, Vol 97, Issue 90

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. moses-chart binary missing? (Eric Baucom)
2. Re: moses-chart binary missing? (Hieu Hoang)
3. Re: METEOR: difference between ranking task and other tasks
(Michael Denkowski)
4. Re: METEOR: difference between ranking task and other tasks
(Marcin Junczys-Dowmunt)


----------------------------------------------------------------------

Message: 1
Date: Wed, 26 Nov 2014 16:15:46 -0500
From: Eric Baucom <eabaucom@umail.iu.edu>
Subject: [Moses-support] moses-chart binary missing?
To: moses-support@mit.edu
Message-ID:
<CALtA6LvMkP2i9fg7fOKOcs6=30h9k3FKbPEvgGgAtipreef7Ug@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

I am interested in experimenting with tree-to-tree translations, so I
recently installed Moses according to the guidelines here:
http://www.statmt.org/moses/?n=Development.GetStarted .

The installation completed successfully, and I am able to successfully
translate using the sample models as described in the same web page, with
the regular "moses" binary. However, my installation is missing the
"moses-chart" binary, which I believe is necessary to do any tree-to-tree
translation. Is this an additional step in the installation? I didn't see
any options for it in the documentation.

Thanks,
Eric Baucom
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20141126/6e4ed709/attachment-0001.htm

------------------------------

Message: 2
Date: Wed, 26 Nov 2014 21:20:25 +0000
From: Hieu Hoang <Hieu.Hoang@ed.ac.uk>
Subject: Re: [Moses-support] moses-chart binary missing?
To: Eric Baucom <eabaucom@umail.iu.edu>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAEKMkbg1TQnmMRYoXnpswaC3bq-6SqvOCma1EsZOuJjuzE=iBQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

There should be a softlink which points to the file moses. moses and
moses_chart are being merged, so everything in the tutorial should still work.
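[Editorial sketch] For anyone verifying their own install, the symlink check can be sketched as below. The bin/ layout is an assumption about a typical Moses build; a scratch directory stands in for it here so the snippet is safe to run anywhere:

```shell
# Stand-in demo for the merged-binary softlink. On a real tree, replace
# "$bin" with your mosesdecoder/bin directory (path is an assumption).
bin=$(mktemp -d)
touch "$bin/moses" && chmod +x "$bin/moses"   # stand-in for the decoder binary
# Create moses_chart as a softlink to moses if it is missing.
[ -e "$bin/moses_chart" ] || ln -s moses "$bin/moses_chart"
readlink "$bin/moses_chart"
```

Because the link target is the relative name moses, the link keeps working if the whole bin/ directory is moved.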
On 26 Nov 2014 21:16, "Eric Baucom" <eabaucom@umail.iu.edu> wrote:

> I am interested in experimenting with tree-to-tree translations, so I
> recently installed Moses according to the guidelines here:
> http://www.statmt.org/moses/?n=Development.GetStarted .
>
> The installation completed successfully, and I am able to successfully
> translate using the sample models as described in the same web page, with
> the regular "moses" binary. However, my installation is missing the
> "moses-chart" binary, which I believe is necessary to do any tree-to-tree
> translation. Is this an additional step in the installation? I didn't see
> any options for it in the documentation.
>
> Thanks,
> Eric Baucom
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20141126/d8bb6e7f/attachment-0001.htm

------------------------------

Message: 3
Date: Wed, 26 Nov 2014 16:31:29 -0500
From: Michael Denkowski <mdenkows@cs.cmu.edu>
Subject: Re: [Moses-support] METEOR: difference between ranking task
and other tasks
To: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Cc: Moses Support <moses-support@mit.edu>
Message-ID:
<CA+-Geg+KJyWq=rS-86mD5Nm1B4PZwE8LMWyOd5UcHcA5yerJjg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi Marcin,

Meteor scores can vary widely across tasks due to the training data and
goal. The default ranking task tries to replicate WMT rankings, so the
absolute scores are not as important as the relative scores between
systems. The adequacy task tries to fit Meteor scores to numeric adequacy
judgements as linearly as possible. If you're looking to evaluate a system
in isolation to see if the translations are "good", you can simulate an
adequacy scale with the "adq" task. If you're comparing multiple systems,
you should get the most reliable ranking with the default "rank" task, but
the absolute scores will be less meaningful.
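[Editorial sketch] For readers wanting to compare the two setups themselves, the task is chosen with Meteor's -t flag. The invocation below is a sketch: the jar name and the input file names (system.output, reference.txt) are placeholders, and the flags should be checked against the README shipped with your Meteor version:

```shell
# Hypothetical file names; the loop is a no-op unless the jar is present.
JAR=meteor-1.5.jar
for task in rank adq; do
  if [ -f "$JAR" ]; then
    echo "== task: $task =="
    # Same hypotheses and references scored under each task's parameters.
    java -Xmx2G -jar "$JAR" system.output reference.txt -l en -t "$task"
  fi
done
```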

Best,
Michael

On Wed, Nov 26, 2014 at 9:34 AM, Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
wrote:

> Hi,
>
> A question concerning METEOR; maybe someone has some experience. I am
> seeing huge differences between the values for English with the default
> task "ranking" and any of the other tasks (e.g. "adq"), up to 30-40
> points. Is this normal? In the literature I only ever see marginal
> differences of maybe 1 or 2 per cent, but nothing like 35% vs. 65%. For
> the language-independent setting I still get a score of 55%.
>
> See for instance
> http://www.cs.cmu.edu/~alavie/METEOR/pdf/meteor-wmt11.pdf, where the
> Urdu-English system shows much smaller differences between "ranking" and
> "adq". I get the same discrepancies with meteor-1.3.jar and meteor-1.5.jar.
>
> Cheers,
>
> Marcin
>
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20141126/8d5168d7/attachment-0001.htm

------------------------------

Message: 4
Date: Wed, 26 Nov 2014 22:46:55 +0100
From: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Subject: Re: [Moses-support] METEOR: difference between ranking task
and other tasks
To: Michael Denkowski <mdenkows@cs.cmu.edu>
Cc: Moses Support <moses-support@mit.edu>
Message-ID: <54764A4F.8050000@amu.edu.pl>
Content-Type: text/plain; charset="utf-8"

Thanks, that's a very useful answer. I figured something similar, but I
was curious why such huge differences between the methods are never
reported anywhere. Even in your paper they are just a few percent.

Also, could it be that the default METEOR setting is slightly
overfitting to the WMT ranking task? I have the impression that for
systems with generally higher BLEU scores than the WMT systems (beyond
45% BLEU), METEOR seems to flatten out, barely changing its values,
while the BLEU differences are 4-6% absolute. This does not happen for
BLEU values around 20-30%; in that range METEOR scales nearly linearly,
following the BLEU scores quite closely.

Cheers,
Marcin

W dniu 26.11.2014 o 22:31, Michael Denkowski pisze:
> Hi Marcin,
>
> Meteor scores can vary widely across tasks due to the training data
> and goal. The default ranking task tries to replicate WMT rankings,
> so the absolute scores are not as important as the relative scores
> between systems. The adequacy task tries to fit Meteor scores to
> numeric adequacy judgements as linearly as possible. If you're
> looking to evaluate a system in isolation to see if the translations
> are "good", you can simulate an adequacy scale with the "adq" task.
> If you're comparing multiple systems, you should get the most reliable
> ranking with the default "rank" task, but the absolute scores will be
> less meaningful.
>
> Best,
> Michael
>
> On Wed, Nov 26, 2014 at 9:34 AM, Marcin Junczys-Dowmunt
> <junczys@amu.edu.pl> wrote:
>
> Hi,
>
> A question concerning METEOR; maybe someone has some experience. I
> am seeing huge differences between the values for English with the
> default task "ranking" and any of the other tasks (e.g. "adq"), up
> to 30-40 points. Is this normal? In the literature I only ever see
> marginal differences of maybe 1 or 2 per cent, but nothing like 35%
> vs. 65%. For the language-independent setting I still get a score
> of 55%.
>
> See for instance
> http://www.cs.cmu.edu/~alavie/METEOR/pdf/meteor-wmt11.pdf, where
> the Urdu-English system shows much smaller differences between
> "ranking" and "adq". I get the same discrepancies with
> meteor-1.3.jar and meteor-1.5.jar.
>
> Cheers,
>
> Marcin
>
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20141126/8e02fa9f/attachment.htm

------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 97, Issue 90
*********************************************
