Moses-support Digest, Vol 84, Issue 46

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. Re: How to use the NeuralLM (Hieu Hoang)
2. Re: incremental training (Miles Osborne)
3. Re: How to use the NeuralLM (Kenneth Heafield)
4. What is the best configuration for MOSES so far, decoding
speed-wise? (Roee Aharoni)
5. Re: What is the best configuration for MOSES so far, decoding
speed-wise? (Tom Hoar)


----------------------------------------------------------------------

Message: 1
Date: Wed, 30 Oct 2013 17:02:19 +0000
From: Hieu Hoang <Hieu.Hoang@ed.ac.uk>
Subject: Re: [Moses-support] How to use the NeuralLM
To: Kenneth Heafield <moses@kheafield.com>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAEKMkbhU=qKtn9y1oEYd-CiNHz8xj+VN23MWJ06dXV+e9OpdEg@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

An assumption made in the Moses integration is that there is no backoff
state, so the state info is just a hash of the full n-gram. Do you think
there is a better choice of state info?
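
For concreteness, here is a minimal sketch of what such a state could look
like. It is illustrative only: the struct, the function, and the choice of
FNV-1a are my own assumptions, not the actual Moses code.

    #include <cstdint>
    #include <cstddef>
    #include <vector>

    // Hypothetical recombination state: with no backoff information
    // available, the state reduces to a single hash over the full
    // n-gram context.
    struct NeuralLMState {
      std::uint64_t context_hash;
    };

    // Hash the last (order - 1) word ids with FNV-1a; two hypotheses can
    // recombine only if their hashed contexts are equal.
    inline NeuralLMState HashContext(const std::vector<int> &words,
                                     std::size_t order) {
      std::uint64_t h = 1469598103934665603ULL;
      std::size_t start =
          words.size() > order - 1 ? words.size() - (order - 1) : 0;
      for (std::size_t i = start; i < words.size(); ++i) {
        h ^= static_cast<std::uint64_t>(words[i]);
        h *= 1099511628211ULL;
      }
      return NeuralLMState{h};
    }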


On 29 October 2013 15:48, Kenneth Heafield <moses@kheafield.com> wrote:

> Hi,
>
> I also feel obliged to point out that nplm, a.k.a. NeuralLM, is currently
> not thread-safe, but I have an open thread with Ashish on this point.
>
> Kenneth
>
> On 10/29/13 04:36, Hieu Hoang wrote:
> > I just integrated Ashish Vaswani's Neural LM into the decoder, based on
> > code by Lane Schwartz.
> >
> > To compile Moses with the new LM, do
> > ./bjam .... --with-nplm=[path/to/nplm]
> >
> > I don't know if it works yet, nor how to use it. You should read
> > Ashish's paper and download his code:
> > http://nlg.isi.edu/software/nplm/
> >
> > If you find out how to use it, please let us know!
> >
> >
> >
> > On 29 October 2013 02:28, Li Xiang <lixiang.ict@gmail.com> wrote:
> >
> > Hi,
> >
> > I just found that NeuralLM has been added to Moses. Could you give me
> > an example of how to use it? Thanks.
> >
> > --
> > Xiang Li
> >
> > _______________________________________________
> > Moses-support mailing list
> > Moses-support@mit.edu <mailto:Moses-support@mit.edu>
> > http://mailman.mit.edu/mailman/listinfo/moses-support
> >
> >
> >
> >
> > --
> > Hieu Hoang
> > Research Associate
> > University of Edinburgh
> > http://www.hoang.co.uk/hieu
> >
> >
> >
> > _______________________________________________
> > Moses-support mailing list
> > Moses-support@mit.edu
> > http://mailman.mit.edu/mailman/listinfo/moses-support
> >
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>



--
Hieu Hoang
Research Associate
University of Edinburgh
http://www.hoang.co.uk/hieu

------------------------------

Message: 2
Date: Wed, 30 Oct 2013 13:39:30 -0400
From: Miles Osborne <miles@inf.ed.ac.uk>
Subject: Re: [Moses-support] incremental training
To: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID:
<CAPRfTYpJvrqgdqZVxkn8N0a7ZCpwunUjVj3pHoBvQUDSKULYXA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

Incremental training in Moses is based upon work we did a few years back:

http://homepages.inf.ed.ac.uk/miles/papers/naacl10b.pdf

Table 3 shows that there is essentially no quality difference between
incremental training and standard GIZA++ training. Incremental
(re)training is a lot faster.

Miles

--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.


------------------------------

Message: 3
Date: Wed, 30 Oct 2013 10:46:51 -0700
From: Kenneth Heafield <moses@kheafield.com>
Subject: Re: [Moses-support] How to use the NeuralLM
To: Hieu Hoang <Hieu.Hoang@ed.ac.uk>
Cc: moses-support <moses-support@mit.edu>
Message-ID: <5271460B.4090609@kheafield.com>
Content-Type: text/plain; charset=ISO-8859-1

My understanding is that part of the point is examining the full n-gram.
There is no analogue to state minimization, but some recombination may
be licensed by the small vocabulary of the model. In terms of
efficiency, you might just want to carry the vocab ids instead of a hash.

I've made a thread-safe fork of their code at https://github.com/kpu/nplm.
The idea is that you make a copy of neuralLM for each thread. Ashish
Vaswani and David Chiang said they have their own thread-safe version;
I'm waiting for them to release it.
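
To illustrate the per-thread-copy idea, here is a minimal sketch. It assumes
an nplm::neuralLM class with a copy constructor and a lookup_ngram() scoring
call; treat the header path and the signatures as assumptions rather than the
actual API.

    #include <vector>

    #include "neuralLM.h"  // from the nplm package; path and API assumed

    // Each decoder thread lazily constructs its own copy of the model the
    // first time it scores an n-gram, so no locking is needed on the
    // shared instance.
    double ScoreNgram(const nplm::neuralLM &shared_model,
                      const std::vector<int> &ngram) {
      thread_local nplm::neuralLM local_copy(shared_model);
      return local_copy.lookup_ngram(ngram);
    }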

Kenneth

On 10/30/13 10:02, Hieu Hoang wrote:
> An assumption made in the Moses integration is that there is no backoff
> state, so the state info is just a hash of the full n-gram. Do you think
> there is a better choice of state info?
>
>
> On 29 October 2013 15:48, Kenneth Heafield <moses@kheafield.com> wrote:
>
> Hi,
>
> I also feel obliged to point out that nplm, a.k.a. NeuralLM, is currently
> not thread-safe, but I have an open thread with Ashish on this point.
>
> Kenneth
>
> On 10/29/13 04:36, Hieu Hoang wrote:
> > I just integrated Ashish Vaswani's Neural LM into the decoder,
> based on
> > code by Lane Schwartz.
> >
> > To compile Moses with the new LM, do
> > ./bjam .... --with-nplm=[path/to/nplm]
> >
> > I don't know if it works yet, nor how to use it. You should read
> > Ashish's paper and download his code:
> > http://nlg.isi.edu/software/nplm/
> >
> > If you find out how to use it, please let us know!
> >
> >
> >
> > On 29 October 2013 02:28, Li Xiang <lixiang.ict@gmail.com> wrote:
> >
> > Hi,
> >
> > I just found that NeuralLM has been added to Moses. Could you give me
> > an example of how to use it? Thanks.
> >
> > --
> > Xiang Li
> >
> > _______________________________________________
> > Moses-support mailing list
> > Moses-support@mit.edu
> > http://mailman.mit.edu/mailman/listinfo/moses-support
> >
> >
> >
> >
> > --
> > Hieu Hoang
> > Research Associate
> > University of Edinburgh
> > http://www.hoang.co.uk/hieu
> >
> >
> >
> > _______________________________________________
> > Moses-support mailing list
> > Moses-support@mit.edu <mailto:Moses-support@mit.edu>
> > http://mailman.mit.edu/mailman/listinfo/moses-support
> >
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu <mailto:Moses-support@mit.edu>
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
>
>
> --
> Hieu Hoang
> Research Associate
> University of Edinburgh
> http://www.hoang.co.uk/hieu
>


------------------------------

Message: 4
Date: Thu, 31 Oct 2013 12:27:07 +0200
From: Roee Aharoni <roee.aharoni@gmail.com>
Subject: [Moses-support] What is the best configuration for MOSES so
far, decoding speed-wise?
To: moses-support@mit.edu
Message-ID: <0446DA83-795A-4B91-A0D3-A5B033EC2530@gmail.com>
Content-Type: text/plain; charset=us-ascii

Hi all!

We are looking into building an SMT system based on Moses, and we are ready to put in significant effort and resources. We have already built a baseline system with the toolkit and got very positive early results.

Now we would like to know what setup you would recommend (especially regarding decoding speed and latency), based on your experience so far, in terms of:

- Operating system to use
- CPU and memory options
- Virtual vs. Physical servers
- Which LM to use
- Scale-out options available

Etc.

We would really appreciate any help, and we hope to become users of and contributors to the project!

Best regards,
Roee


------------------------------

Message: 5
Date: Thu, 31 Oct 2013 19:01:52 +0700
From: Tom Hoar <tahoar@precisiontranslationtools.com>
Subject: Re: [Moses-support] What is the best configuration for MOSES
so far, decoding speed-wise?
To: moses-support@mit.edu
Message-ID: <527246B0.6040800@precisiontranslationtools.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi Roee. You might want to look at the three configurations here that
use Moses RELEASE 1.0:

http://www.precisiontranslationtools.com/products/hardware-requirements-domt-desktop/

Tom




On 10/31/2013 05:27 PM, Roee Aharoni wrote:
> Hi all!
>
> We are looking into building an SMT system based on Moses, and we are ready to put in significant effort and resources. We have already built a baseline system with the toolkit and got very positive early results.
>
> Now we would like to know what setup you would recommend (especially regarding decoding speed and latency), based on your experience so far, in terms of:
>
> - Operating system to use
> - CPU and memory options
> - Virtual vs. Physical servers
> - Which LM to use
> - Scale-out options available
>
> Etc.
>
> We would really appreciate any help, and we hope to become users of and contributors to the project!
>
> Best regards,
> Roee
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support



------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 84, Issue 46
*********************************************
