Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: Moses-support Digest, Vol 108, Issue 15
(Marcin Junczys-Dowmunt)
----------------------------------------------------------------------
Message: 1
Date: Tue, 6 Oct 2015 22:43:28 +0200
From: Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Subject: Re: [Moses-support] Moses-support Digest, Vol 108, Issue 15
To: moses-support@mit.edu
Message-ID: <56143270.6090901@amu.edu.pl>
Content-Type: text/plain; charset="windows-1252"
How about multiple multi-threaded processes then? For instance, 4-8
threads per process.
Best,
Marcin
On 06.10.2015 at 22:32, Ventsislav Zhechev wrote:
> Hi Hieu,
>
> While I was at Autodesk, I used the multi-process approach (with one
> process per virtual server even) and was quite happy with how that
> worked in production. I find the main benefit of the multi-process
> approach comes in complex production environments where you might end
> up with several layers of load balancing. In that case, the load
> balancing of the MT part is simpler to manage (i.e. more predictable)
> when you have one-thread MT processes that you can manage individually
> (starting/stopping as necessary) - particularly handy when individual
> requests to the MT infrastructure may contain anywhere between 1 and
> 10,000 segments each. (That is, if you have an 8-thread moses running,
> any request under about 12 segments will be underutilizing the
> available resources.) Multi-threaded MT processes are in effect
> creating an extra level of load balancing that cannot be managed at
> runtime.
>
> Just my 2¢.
>
>
> Cheers,
>
> Ventzi
>
> -------
> Computational Linguist, Certified ScrumMaster?
>
> _http://VentsislavZhechev.eu_
>
>
>> On Oct 6, 2015, at 1:17 PM, moses-support-request@mit.edu wrote:
>>
>> Date: Tue, 6 Oct 2015 21:16:59 +0100
>> From: Hieu Hoang <hieuhoang@gmail.com>
>> Subject: Re: [Moses-support] Faster decoding with multiple moses
>> instances
>> To: Michael Denkowski <michael.j.denkowski@gmail.com>, Philipp Koehn
>> <phi@jhu.edu>
>> Cc: Moses Support <moses-support@mit.edu>
>> Message-ID: <56142C3B.6040701@gmail.com>
>> Content-Type: text/plain; charset="windows-1252"
>>
>> I've just run some comparisons between the multi-threaded decoder and
>> the multi_moses.py script. It's good stuff.
>>
>> It makes me seriously wonder whether we should abandon multi-threading
>> and go all out for the multi-process approach.
>>
>> There are some advantages to multi-threading - e.g. where model files
>> are loaded into memory rather than memory-mapped. But there are
>> disadvantages too - it is more difficult to maintain and there is about
>> a 10% overhead.
>>
>> What do people think?
>>
>> Phrase-based (threads/processes: 1, 5, 10, 15, 20, 25, 30):
>>
>> (32) Baseline (Compact pt)
>>   real  4m37.000s  1m15.391s  0m51.217s  0m48.287s  0m50.719s  0m52.027s  0m53.045s
>>   user  4m21.544s  5m28.597s  6m38.227s  8m0.975s   8m21.122s  8m3.195s   8m4.663s
>>   sys   0m15.451s  0m34.669s  0m53.867s  1m10.515s  1m20.746s  1m24.368s  1m23.677s
>>
>> (34) = (32) + multi_moses (the 30-process run was killed)
>>   real  4m49.474s  1m17.867s  0m43.096s  0m31.999s  0m26.497s  0m26.296s  killed
>>   user  4m33.580s  4m40.486s  4m56.749s  5m6.692s   5m43.845s  7m34.617s
>>   sys   0m15.957s  0m32.347s  0m51.016s  1m11.106s  1m44.115s  2m21.263s
>>
>> (38) Baseline (Probing pt)
>>   real  4m46.254s  1m16.637s  0m49.711s  0m48.389s  0m49.144s  0m51.676s  0m52.472s
>>   user  4m30.596s  5m32.500s  6m23.706s  7m40.791s  7m51.946s  7m52.892s  7m53.569s
>>   sys   0m15.624s  0m36.169s  0m49.433s  1m6.812s   1m9.614s   1m13.108s  1m12.644s
>>
>> (39) = (38) + multi_moses
>>   real  4m43.882s  1m17.849s  0m34.245s  0m31.318s  0m28.054s  0m24.120s  0m22.520s
>>   user  4m29.212s  4m47.693s  5m5.750s   5m33.573s  6m18.847s  7m19.642s  8m38.013s
>>   sys   0m15.835s  0m25.398s  0m36.716s  0m41.349s  0m48.494s  1m0.843s   1m13.215s
>>
>> Hiero:
>>
>> (3) 6/10 baseline (eight measurements; the thread count of the final
>> column is not recoverable from the original formatting)
>>   real  5m33.011s  1m28.935s  0m59.470s  1m0.315s   0m55.619s  0m57.347s  0m59.191s  1m2.786s
>>   user  4m53.187s  6m23.521s  8m17.170s  12m48.303s 14m45.954s 17m58.109s 20m22.891s 21m13.605s
>>   sys   0m39.696s  0m51.519s  1m3.788s   1m22.125s  1m58.718s  2m51.249s  4m4.807s   4m37.691s
>>
>> (4) = (3) + multi_moses (seven measurements; the single-instance column
>> appears to be missing)
>>   real  1m27.215s  0m40.495s  0m36.206s  0m28.623s  0m26.631s  0m25.817s  0m25.401s
>>   user  5m4.819s   5m42.070s  5m35.132s  6m46.001s  7m38.151s  9m6.500s   10m32.739s
>>   sys   0m38.039s  0m45.753s  0m44.117s  0m52.285s  0m56.655s  1m6.749s   1m16.935s
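For readers comparing rows in the tables above, the `XmY.ZZZs` times from `time(1)` can be converted to seconds and the speedups computed directly. A small helper (not part of the Moses distribution), applied here to the phrase-based multi_moses row (39):

```python
import re

def to_seconds(t):
    """Convert a time(1)-style 'XmY.ZZZs' string to seconds."""
    m = re.fullmatch(r"(\d+)m([\d.]+)s", t)
    return int(m.group(1)) * 60 + float(m.group(2))

# Wall-clock speedup of 30 single-threaded instances over one instance:
baseline = to_seconds("4m43.882s")   # 1 instance
parallel = to_seconds("0m22.520s")   # 30 instances
print(round(baseline / parallel, 1))  # → 12.6
```

Note that `user` time grows with the process count while `real` time shrinks, so the wall-clock speedup overstates the CPU efficiency.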
>>
>>
>> On 05/10/2015 16:05, Michael Denkowski wrote:
>>> Hi Philipp,
>>>
>>> Unfortunately I don't have a precise measurement. If anyone knows of
>>> a good way to benchmark a process tree with lots of memory mapping the
>>> same files, I would be glad to run it.
>>>
>>> --Michael
>>>
>>> On Mon, Oct 5, 2015 at 10:26 AM, Philipp Koehn <phi@jhu.edu> wrote:
>>>
>>> Hi,
>>>
>>> great - that will be very useful.
>>>
>>> Since you just ran the comparison - do you have any numbers on
>>> "still allowed everything to fit into memory", i.e., how much more
>>> memory is used by running parallel instances?
>>>
>>> -phi
>>>
>>> On Mon, Oct 5, 2015 at 10:15 AM, Michael Denkowski
>>> <michael.j.denkowski@gmail.com> wrote:
>>>
>>> Hi all,
>>>
>>> Like some other Moses users, I noticed diminishing returns
>>> from running Moses with several threads. To work around this,
>>> I added a script to run multiple single-threaded instances of
>>> moses instead of one multi-threaded instance. In practice,
>>> this sped things up by about 2.5x for 16 cpus and using memory
>>> mapped models still allowed everything to fit into memory.
>>>
>>> If anyone else is interested in using this, you can prefix a
>>> moses command with scripts/generic/multi_moses.py. To use
>>> multiple instances in mert-moses.pl,
>>> specify --multi-moses and control the number of parallel
>>> instances with --decoder-flags='-threads N'.
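The idea behind the script - split the input, hand segments to independent single-threaded workers, and reassemble the output in the original order - can be sketched in a few lines of Python. `decode` and `multi_decode` are illustrative stand-ins, not the actual multi_moses.py internals; a real worker would pipe each line through a moses process:

```python
import multiprocessing

def decode(line):
    # Stand-in for one single-threaded decoder translating one segment.
    return line.upper()

def multi_decode(lines, procs=4):
    """Distribute segments across worker processes, preserving input order."""
    with multiprocessing.Pool(procs) as pool:
        # Pool.map returns results in submission order, so the output
        # lines up with the input even though decoding runs in parallel.
        return pool.map(decode, lines)

if __name__ == "__main__":
    print(multi_decode(["bonjour le monde", "merci"], procs=2))
```

Because each worker is a separate OS process, memory-mapped model files are shared between them by the kernel, which is what keeps the memory footprint manageable.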
>>>
>>> Below is a benchmark on WMT fr-en data (2M training sentences,
>>> 400M words mono, suffix array PT, compact reordering, 5-gram
>>> KenLM) testing default stack decoding vs cube pruning without
>>> and with the parallelization script (+multi):
>>>
>>> ---
>>> 1cpu sent/sec
>>> stack 1.04
>>> cube 2.10
>>> ---
>>> 16cpu sent/sec
>>> stack 7.63
>>> +multi 12.20
>>> cube 7.63
>>> +multi 18.18
>>> ---
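As a sanity check on the "about 2.5x" figure, the speedups implied by the 16-cpu numbers above can be computed directly (plain arithmetic, nothing Moses-specific):

```python
# Throughputs (sentences/sec) from the benchmark above, 16 cpus.
stack, stack_multi = 7.63, 12.20
cube, cube_multi = 7.63, 18.18

print(round(stack_multi / stack, 2))  # → 1.6
print(round(cube_multi / cube, 2))    # → 2.38
```

So the ~2.5x figure corresponds to the cube-pruning configuration; stack decoding gains about 1.6x from the wrapper.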
>>>
>>> --Michael
>>>
>>> _______________________________________________
>>> Moses-support mailing list
>>> Moses-support@mit.edu
>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>>
>>
>> --
>> Hieu Hoang
>> http://www.hoang.co.uk/hieu
>>
>>
>> ------------------------------
>>
>>
>
>
>
------------------------------
End of Moses-support Digest, Vol 108, Issue 18
**********************************************