Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: Faster decoding with multiple moses instances (Hieu Hoang)
----------------------------------------------------------------------
Message: 1
Date: Fri, 9 Oct 2015 17:14:12 +0100
From: Hieu Hoang <hieuhoang@gmail.com>
Subject: Re: [Moses-support] Faster decoding with multiple moses
instances
To: moses-support@mit.edu, Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Message-ID: <5617E7D4.2020102@gmail.com>
Content-Type: text/plain; charset="utf-8"
I appear to have screwed up. The unblockpt branch is faster at large numbers of
threads. Its downside is that it appears to use more memory, so
multi_moses.py brings down the server at high thread counts. Can't win them all.
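The extra memory at high thread counts is consistent with the caches becoming thread-local (as described further down in the thread): each thread holds its own copy of the cache, so no mutex is needed but memory scales with the thread count. A minimal sketch of that trade-off, with hypothetical names (this is not the actual Moses code):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

using TargetPhraseVector = std::vector<std::string>;

// Per-thread cache: each thread gets its own map, so lookups and inserts
// need no lock. The cost is one copy of the cache per thread, which would
// explain the higher memory use reported above.
std::unordered_map<std::string, TargetPhraseVector> &ThreadCache() {
  thread_local std::unordered_map<std::string, TargetPhraseVector> cache;
  return cache;
}

void CachePhrase(const std::string &source, const TargetPhraseVector &tpv) {
  ThreadCache()[source] = tpv;  // no mutex needed: the map is thread-local
}

const TargetPhraseVector *Lookup(const std::string &source) {
  auto &c = ThreadCache();
  auto it = c.find(source);
  return it == c.end() ? nullptr : &it->second;
}
```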
Threads:        1          5          10         15         20         25         30         35

49 Baseline (master)
  real  4m56.474s  1m17.770s  0m50.482s  0m49.970s  0m50.851s  0m52.411s  0m54.263s  0m55.137s
  user  4m39.099s  5m39.706s  6m32.275s  7m54.693s  8m7.420s   8m7.606s   8m26.099s  8m8.707s
  sys   0m17.379s  0m35.081s  0m55.350s  1m13.207s  1m21.048s  1m25.325s  1m26.464s  1m28.651s

50 (49) + unblockpt
  real  4m52.220s  1m16.839s  0m45.847s  0m38.332s  0m36.764s  0m36.254s  0m36.254s  0m36.833s
  user  4m34.703s  5m38.984s  6m14.616s  7m14.220s  8m45.198s  9m49.285s  9m49.285s  11m51.531s
  sys   0m17.484s  0m34.341s  0m57.122s  1m34.292s  2m19.347s  3m34.444s  3m34.444s  4m55.236s

51 (50) + multi_moses
  real  -          1m16.387s  0m41.680s  0m38.793s  0m31.237s  Crashed    Crashed    Crashed
  user  -          5m6.564s   5m21.844s  5m44.855s  6m21.015s  -          -          -
  sys   -          0m40.458s  0m57.749s  1m16.392s  1m44.173s  -          -          -

52 (49) + multi_moses
  real  -          1m32.930s  0m49.833s  0m49.833s  0m28.860s  0m30.364s  -          -
  user  -          5m2.480s   5m14.156s  5m14.156s  6m22.374s  6m40.412s  -          -
  sys   -          0m35.557s  0m53.235s  0m53.235s  1m41.948s  2m14.619s  -          -

53 (50) + probing
  real  4m36.515s  1m13.842s  0m44.441s  0m36.498s  0m34.639s  0m33.218s  0m33.003s  0m33.482s
  user  4m20.862s  5m20.037s  6m0.768s   6m56.545s  8m21.316s  9m20.490s  10m22.638s 10m50.360s
  sys   0m15.712s  0m35.746s  0m53.254s  1m19.331s  1m54.006s  2m40.239s  3m43.040s  3m59.816s
On 08/10/2015 21:00, Marcin Junczys-Dowmunt wrote:
> I have a branch, "unblockpt", where those locks are gone and caches are
> thread-local. Hieu claims there is still no speed-up.
>
> On 08.10.2015 at 21:56, Kenneth Heafield wrote:
>> Good point. I now blame this code from
>> moses/TranslationModel/CompactPT/TargetPhraseCollectionCache.h
>>
>> Looks like a case for a concurrent fixed-size hash table. Failing that,
>> banded locks instead of a single lock? Namely an array of hash tables,
>> each of which is independently locked.
>>
>> /** retrieve translations for source phrase from persistent cache **/
>> void Cache(const Phrase &sourcePhrase, TargetPhraseVectorPtr tpv,
>> size_t bitsLeft = 0, size_t maxRank = 0) {
>> #ifdef WITH_THREADS
>> boost::mutex::scoped_lock lock(m_mutex);
>>
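Kenneth's banded-lock suggestion could look roughly like the sketch below: an array of hash tables, each guarded by its own mutex, with a key hashed to one band so threads only contend when they hit the same band. This is an illustrative sketch with hypothetical names, not the actual Moses code:

```cpp
#include <array>
#include <functional>
#include <mutex>
#include <unordered_map>

// Banded (striped) locking: N independently locked hash tables.
// A key hashes to one band, so two threads block each other only
// when they touch the same band, not on every cache access.
template <typename K, typename V, size_t Bands = 16>
class BandedCache {
 public:
  void Put(const K &key, const V &value) {
    Band &b = BandFor(key);
    std::lock_guard<std::mutex> lock(b.mutex);
    b.table[key] = value;
  }

  bool Get(const K &key, V &out) {
    Band &b = BandFor(key);
    std::lock_guard<std::mutex> lock(b.mutex);
    auto it = b.table.find(key);
    if (it == b.table.end()) return false;
    out = it->second;
    return true;
  }

 private:
  struct Band {
    std::mutex mutex;
    std::unordered_map<K, V> table;
  };

  Band &BandFor(const K &key) {
    return m_bands[std::hash<K>()(key) % Bands];
  }

  std::array<Band, Bands> m_bands;
};
```

Compared with the single `boost::mutex` above, this keeps the cache shared across threads (no per-thread memory blow-up) while dividing lock contention by the number of bands.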
End of Moses-support Digest, Vol 108, Issue 30