Moses-support Digest, Vol 121, Issue 26

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. Re: Character ngrams using KenLM (Nat Gillin)
2. Hierarchical MOSES Model-Training (Guillem Torres Badia)
3. Re: Hierarchical MOSES Model-Training (Hieu Hoang)


----------------------------------------------------------------------

Message: 1
Date: Thu, 10 Nov 2016 12:14:28 +0800
From: Nat Gillin <nat.gillin@gmail.com>
Subject: Re: [Moses-support] Character ngrams using KenLM
To: Kenneth Heafield <moses@kheafield.com>
Cc: moses-support@mit.edu
Message-ID:
<CAD2EOZgvVdT9h+ezTQ=C149HYtfye3NfqLBHQZPu3on4_POmFg@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Dear Kenneth and Moses community,

@Kenneth, Thank you for the tip!

Regards,
Nat
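
Kenneth's suggestion below can be sketched as a small preprocessing step — a minimal example, assuming "<spc>" as the placeholder token (any token unused in the data would do):

```python
# Preprocessing sketch for a character-level LM with KenLM: real spaces
# become a placeholder token and every remaining character becomes its
# own whitespace-separated token. The resulting lines can then be piped
# into lmplz to build an ARPA model without KenLM itself needing to
# know about characters.
def to_char_tokens(line, space_token="<spc>"):
    return " ".join(space_token if ch == " " else ch for ch in line.rstrip("\n"))

print(to_char_tokens("a cat"))  # prints: a <spc> c a t
```

The transformed corpus could then be fed to lmplz as usual (e.g. `lmplz -o 5 < chars.txt > char.arpa`); the order and other flags depend on your setup.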

On Wed, Nov 9, 2016 at 4:46 PM, Kenneth Heafield <moses@kheafield.com>
wrote:

> No. Tokenizer and LM are separate tools. You can of course replace space
> with a token like <spc> or something.
>
> On November 9, 2016 6:04:07 AM GMT+00:00, Nat Gillin <nat.gillin@gmail.com>
> wrote:
>
>> Dear Moses community,
>>
>> Other than manually replacing spaces with an unused character and adding
>> a space after each character before training a language model with KenLM,
>> is it possible for KenLM to generate character n-grams and output them in
>> ARPA format without altering the input file?
>>
>> Regards,
>> Nat
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20161109/fe76ff69/attachment-0001.html

------------------------------

Message: 2
Date: Thu, 10 Nov 2016 11:56:13 +0100
From: Guillem Torres Badia <guitorba@inf.upv.es>
Subject: [Moses-support] Hierarchical MOSES Model-Training
To: moses-support@mit.edu
Message-ID:
<20161110115613.Horde.U-pavlSyQz4p6ufX8itPfQk@webmail.upv.es>
Content-Type: text/plain; charset="utf-8"

Hello,

I am training a hierarchical translation model with moses-chart. I would
like to know which training parameters I should consider and which values
they should take to give good results in terms of BLEU.

I attach the script I'm currently using, in case you need to check some
details.

Guillem
-------------- next part --------------
A non-text attachment was scrubbed...
Name: launch_es-en.sh
Type: application/x-sh
Size: 2619 bytes
Desc: not available
Url : http://mailman.mit.edu/mailman/private/moses-support/attachments/20161110/bec9f36b/attachment-0001.sh

------------------------------

Message: 3
Date: Thu, 10 Nov 2016 11:18:43 +0000
From: Hieu Hoang <hieuhoang@gmail.com>
Subject: Re: [Moses-support] Hierarchical MOSES Model-Training
To: Guillem Torres Badia <guitorba@inf.upv.es>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAEKMkbgYnSpcnH_nQdP5gTxVXzG3X+Qqe39rO4=GLrF8Ybijtw@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

The default parameters should give you reasonable BLEU compared to
phrase-based models.

I personally use the EMS to run my experiments, rather than a script. You
can see my EMS config file here:
http://www.statmt.org/moses/RELEASE-3.0/models/de-en/config.hiero.recase
The EMS will create scripts like the one you attached. The scripts it
creates can be found here:
http://www.statmt.org/moses/RELEASE-3.0/models/de-en/steps/2/
More details on the EMS are here:
http://www.statmt.org/moses/?n=FactoredTraining.EMS

Hieu Hoang
http://www.hoang.co.uk/hieu

On 10 November 2016 at 10:56, Guillem Torres Badia <guitorba@inf.upv.es>
wrote:

> Hello,
>
> I am training a hierarchical translation model with moses-chart. I would
> like to know which training parameters I should consider and which values
> they should take to give good results in terms of BLEU.
>
> I attach the script I'm currently using, in case you need to check some
> details.
>
> Guillem
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20161110/56c9a214/attachment-0001.html

------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 121, Issue 26
**********************************************
