Moses-support Digest, Vol 122, Issue 36

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. Re: Moses-support Digest, Vol 122, Issue 29 (Shubham Khandelwal)
2. Re: Moses-support Digest, Vol 122, Issue 29 (Philipp Koehn)


----------------------------------------------------------------------

Message: 1
Date: Sat, 24 Dec 2016 02:00:45 +0530
From: Shubham Khandelwal <skhlnmiit@gmail.com>
Subject: Re: [Moses-support] Moses-support Digest, Vol 122, Issue 29
To: Mathias Müller <mathias.mueller@uzh.ch>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAHweNTvj1YLfKJ_b=d2K0Ad3PAyPAR8CidY9CfRX4VQiFbdogA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hello,

I have now created an fr-en translation model (phrase-table.minphr and
reordering-table.minlexr are 13 GB and 6.6 GB respectively) by following the
Moses baseline system tutorial on a large dataset. I have also used the cube
pruning method, as Thomas suggested. I now use mosesserver and get responses,
and decoding the input sentences takes a little less time. However, decoding
is still *not* in real time. I have attached my moses.ini for reference.
To make it faster, I found an infrastructure, https://github.com/ufal/mtmonkey,
which speeds up decoding in a distributed way.
Before trying MTMonkey out, I would like to know: is there any other way to
get real-time decoding with Moses? Is it possible on a GPU?

Looking forward to your response.

Thank you.

Regards,
Shubham Khandelwal

On Fri, Dec 16, 2016 at 4:29 PM, Mathias Müller <mathias.mueller@uzh.ch>
wrote:

> Hi Shubham
>
> You could start Moses in server mode:
>
> $ moses -f /path/to/moses.ini --server --server-port 12345 --server-log
> /path/to/log
>
> This will load the models, keep them in memory and the server will wait
> for client requests and serve them until you terminate the process.
> Translating is a bit different in this case, you have to send an XML-RPC
> request to the server.
>
> But first you'd have to make sure Moses is built with XML-RPC.
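[As a concrete illustration of such a request, here is a minimal Python client
sketch. It assumes a server started as above on port 12345, and that the server
exposes the usual mosesserver XML-RPC `translate` method taking a struct with a
`text` field; verify both against your build.]

```python
# Minimal XML-RPC client sketch for a running mosesserver instance.
# Assumes: moses -f /path/to/moses.ini --server --server-port 12345
# The "translate" method name and the {"text": ...} parameter struct follow
# the standard mosesserver XML-RPC interface; adjust if your build differs.
import xmlrpc.client

def make_params(sentence):
    # mosesserver expects a single struct parameter; the source sentence
    # goes under the "text" key.
    return {"text": sentence}

def translate(sentence, url="http://localhost:12345/RPC2"):
    proxy = xmlrpc.client.ServerProxy(url)
    # The reply is a struct; the translation comes back under "text".
    return proxy.translate(make_params(sentence))["text"]

if __name__ == "__main__":
    print(translate("ceci est un test"))
```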
>
> Regards and good luck
> Mathias
>
> Mathias Müller
> AND-2-20
> Institute of Computational Linguistics
> University of Zurich
> Switzerland
> +41 44 635 75 81
> mathias.mueller@uzh.ch
>
> On Fri, Dec 16, 2016 at 10:32 AM, Shubham Khandelwal <skhlnmiit@gmail.com>
> wrote:
>
>> Hey Thomas,
>>
>> Thanks for your reply.
>> Using cube pruning, the speed is a little higher, but not by much. I
>> will try to play with these parameters.
>>
>> I have a moses2 binary which supports it as well, but it takes more time
>> than moses. Could you please share your moses2 binary somewhere, if
>> possible?
>>
>> Also, I do not want to run this command (~/mosesdecoder/bin/moses
>> -f moses.ini -threads all) every time for every input. Is there any way in
>> Moses to keep all the models loaded in memory permanently, so that I can
>> just pass an input and get output in real time without running this
>> command again and again?
>>
>> Looking forward to your response.
>>
>> Thanks again.
>>
>> On Fri, Dec 16, 2016 at 1:20 PM, Tomasz Gawryl <
>> tomasz.gawryl@skrivanek.pl> wrote:
>>
>>> Hi,
>>> If you want to speed up decoding, maybe you should consider changing the
>>> search algorithm. I'm also using compact phrase tables, and after some
>>> tests I realised that cube pruning gives almost exactly the same quality
>>> but is much faster. For example, you can add something like this to your
>>> config file:
>>>
>>> # Cube Pruning
>>> [search-algorithm]
>>> 1
>>> [cube-pruning-pop-limit]
>>> 1000
>>> [stack]
>>> 50
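[For reference, here is a sketch of the same settings passed as command-line
flags when launching the decoder, assuming the usual Moses convention that
moses.ini section names double as flag names; verify against your build.]

```shell
# Equivalent command-line form of the config entries above (a sketch;
# check the flag names against your Moses build)
~/mosesdecoder/bin/moses -f moses.ini \
    -search-algorithm 1 \
    -cube-pruning-pop-limit 1000 \
    -stack 50
```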
>>>
>>> If your model allows, you may also try the moses2 binary, which is
>>> faster than the original.
>>>
>>> Regards,
>>> Thomas
>>>
>>> ----------------------------------------------------------------------
>>>
>>> Message: 1
>>> Date: Thu, 15 Dec 2016 19:12:01 +0530
>>> From: Shubham Khandelwal <skhlnmiit@gmail.com>
>>> Subject: Re: [Moses-support] Regarding Decoding Time
>>> To: Hieu Hoang <hieuhoang@gmail.com>
>>> Cc: moses-support <moses-support@mit.edu>
>>> Message-ID:
>>> <CAHweNTvYeALYrAfJDgDiH51t5_AHSPRV0KwLCABC2td27yoHmA@mail.gm
>>> ail.com>
>>> Content-Type: text/plain; charset="utf-8"
>>>
>>> Hello,
>>>
>>> Currently, I am using phrase-table.minphr, reordering-table.minlexr and a
>>> language model (the total size of these three is 6 GB). I tried decoding
>>> with them on two different machines (8 cores/16 GB RAM *&* 4 cores/40 GB
>>> RAM). Decoding around 500 words took 90 seconds and 100 seconds
>>> respectively on those machines. I am already using the compact phrase and
>>> reordering table representations for faster decoding. Is there any other
>>> way to reduce this decoding time?
>>>
>>> Also, does Moses have a distributed way of decoding on multiple machines?
>>>
>>> Looking forward to your response.
>>>
>>> _______________________________________________
>>> Moses-support mailing list
>>> Moses-support@mit.edu
>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>>
>>
>>
>>
>> --
>> Yours Sincerely,
>>
>> Shubham Khandelwal
>> Masters in Informatics (M2-MoSIG),
>> University Joseph Fourier-Grenoble INP,
>> Grenoble, France
>> Webpage: https://sites.google.com/site/skhandelwl21/
>>
>> _______________________________________________
>> Moses-support mailing list
>> Moses-support@mit.edu
>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>
>>
>


--
Yours Sincerely,

Shubham Khandelwal
Masters in Informatics (M2-MoSIG),
University Joseph Fourier-Grenoble INP,
Grenoble, France
Webpage: https://sites.google.com/site/skhandelwl21/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: moses.ini
Type: application/octet-stream
Size: 1187 bytes
Desc: not available
Url : http://mailman.mit.edu/mailman/private/moses-support/attachments/20161223/53a542d9/attachment-0001.obj

------------------------------

Message: 2
Date: Fri, 23 Dec 2016 18:28:41 -0500
From: Philipp Koehn <phi@jhu.edu>
Subject: Re: [Moses-support] Moses-support Digest, Vol 122, Issue 29
To: Shubham Khandelwal <skhlnmiit@gmail.com>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAAFADDDyg4hux4=i=HfZoDKUcY4RfQhfGF2oh2=+o6aU25bkqQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hi,

MT Monkey is neural machine translation, not Moses.

Moses does not run on a GPU; it uses only the CPU.

When you say that the speed is not "real time", what kind of speed are you
looking for?

The best way, as others in this thread have suggested, is to lower the beam
threshold and use the server mode for low latency and multiple cores for
higher throughput.
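[The latency/throughput split above can be sketched in code: run the server
with multiple threads and send concurrent XML-RPC requests. A hedged Python
sketch; the `translate` method and the {"text": ...} struct are assumptions
based on the standard mosesserver interface, and the names here are
illustrative, not part of Moses.]

```python
# Throughput sketch: concurrent XML-RPC requests against a running
# mosesserver (assumed at localhost:12345, started with enough decoder
# threads, e.g. -threads all).
import xmlrpc.client
from concurrent.futures import ThreadPoolExecutor

def translate_one(sentence, url="http://localhost:12345/RPC2"):
    # One proxy per call: ServerProxy instances are not safe to share
    # across threads.
    proxy = xmlrpc.client.ServerProxy(url)
    return proxy.translate({"text": sentence})["text"]

def translate_batch(sentences, translate=translate_one, workers=8):
    # Per-sentence latency stays the same, but total throughput scales
    # with the number of server threads handling concurrent requests.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(translate, sentences))
```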

-phi


------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 122, Issue 36
**********************************************
