Moses-support Digest, Vol 106, Issue 10

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. Re: EMS results - makes sense ? (Hieu Hoang)
2. Re: Performance issue using Moses Server with Moses 3
(probably same as Oren's) (Martin Baumgärtner)


----------------------------------------------------------------------

Message: 1
Date: Wed, 5 Aug 2015 15:12:06 +0400
From: Hieu Hoang <hieuhoang@gmail.com>
Subject: Re: [Moses-support] EMS results - makes sense ?
To: Kenneth Heafield <moses@kheafield.com>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAEKMkbi4xPi4t63Jp_bOa9S9s0sudjaLPhiEtKjLYC4-Xo0v0w@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Quite right. If anyone wants to volunteer, the out-of-disk-space error
is most likely to be in the sorting.

For extract, the likeliest place is
scripts/generic/extract-parallel.perl, wherever you see the RunFork()
method being called.
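
A minimal sketch of what such a fix might look like, implementing
Kenneth's suggestion below (the helper name run_checked and the example
pipeline are hypothetical; the real change would go wherever
extract-parallel.perl launches its commands, e.g. around the RunFork()
calls):

#!/usr/bin/perl
use strict;
use warnings;

# Run a shell pipeline under "set -e -o pipefail" so that a failure in
# any stage (e.g. sort dying on a full disk) shows up in the exit
# code, then die instead of silently continuing.
sub run_checked {
    my ($cmd) = @_;
    system("bash", "-c", "set -e -o pipefail; $cmd");
    die "Command failed (exit " . ($? >> 8) . "): $cmd\n" if $? != 0;
}

# Illustrative pipeline; the file names are made up.
run_checked("sort extract.tmp | gzip -c > extract.sorted.gz");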

Hieu Hoang
Researcher
New York University, Abu Dhabi
http://www.hoang.co.uk/hieu

On 5 August 2015 at 14:39, Kenneth Heafield <moses@kheafield.com> wrote:

> Looking for a Perl volunteer to:
>
> 1. Always run commands under set -e -o pipefail conditions so errors are
> likely to be reported in the return code.
>
> 2. Actually check the return code and die on failure.
>
> It shouldn't be guesswork when one runs out of disk space.
>
> On 08/05/15 08:43, Hieu Hoang wrote:
> > Are you sure it didn't run out of disk space again? Check the
> > TRAINING_extract.*.STDERR file for messages.
> >
> > Also, because extract and scoring are run in parallel, the error
> > messages sometimes overwrite each other, so you don't get clear
> > messages. You have to use your intuition.
> >
> >
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>

------------------------------

Message: 2
Date: Wed, 05 Aug 2015 14:15:07 +0200
From: Martin Baumgärtner <martin.baumgaertner@star-group.net>
Subject: Re: [Moses-support] Performance issue using Moses Server with
Moses 3 (probably same as Oren's)
To: Oren <mooshified@gmail.com>, Barry Haddow
<bhaddow@staffmail.ed.ac.uk>
Cc: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID: <55C1FE4B.3090505@star-group.net>
Content-Type: text/plain; charset="utf-8"

Hi Oren,

We temporarily fixed this issue with the following quick hack in the
Abyss server's constructor call:

xmlrpc_c::serverAbyss myAbyssServer(
    xmlrpc_c::serverAbyss::constrOpt()
    .registryP(&myRegistry)
    .portNumber(port)      // TCP port on which to listen
    .logFileName(logfile)
    .allowOrigin("*")
    .maxConn((unsigned int)numThreads * 4)  // *4: unofficial quick hack
                                            // for the performance issue
);

I'm also looking forward to the official fix, i.e. a configurable
value for the number of Abyss connections.

Kind regards,
Martin


On 04.08.2015 at 09:08, Oren wrote:
> Hi Barry and Martin,
>
> Has this issue been fixed in the source code? Should I take the
> current master branch and compile it myself to avoid this issue?
>
> Thanks.
>
> On Friday, July 24, 2015, Barry Haddow <bhaddow@staffmail.ed.ac.uk> wrote:
>
> Hi Martin
>
> So it looks like it was the Abyss connection limit that was
> causing the problem? I'm not sure why this should be; it should
> either queue the jobs up or discard them.
>
> Probably Moses server should allow users to configure the number
> of abyss connections directly rather than tying it to the number
> of Moses threads.
>
> cheers - Barry
>
> On 24/07/15 14:17, Martin Baumgärtner wrote:
>> Hi Barry,
>>
>> thanks for your quick reply!
>>
>> We're currently testing on SHA
>> e53ad4085942872f1c4ce75cb99afe66137e1e17 (master, from
>> 2015-07-23). This version includes the fix for mosesserver
>> recently mentioned by Hieu in the performance thread.
>>
>> Following my first intuition, I re-ran the critical experiments
>> after modifying mosesserver.cpp to simply double the given
>> --threads value, but only for the Abyss server:
>> .maxConn((unsigned int)numThreads*2)
>>
>> 2.)
>> server: --threads: 8 (i.e. abyss: 16)
>> client: shoots 10 threads => about 11 seconds, server shows busy
>> CPU workload => OK
>>
>> 5.)
>> server: --threads: 16 (i.e. abyss: 32)
>> client: shoots 20 threads => about 11 seconds, server shows busy
>> CPU workload => OK
>>
>> Helps. :-)
>>
>> Best wishes,
>> Martin
>>
>> On 24.07.2015 at 13:26, Barry Haddow wrote:
>>> Hi Martin
>>>
>>> Thanks for the detailed information. It's a bit strange since
>>> command-line Moses uses the same threadpool, and we always
>>> overload the threadpool since the entire test set is read in and
>>> queued.
>>>
>>> The server was refactored somewhat recently - which git revision
>>> are you using?
>>>
>>> In the case where Moses takes a long time and CPU activity is
>>> low, it could be either waiting on IO or waiting on locks. If the
>>> former, I don't know why it works fine for command-line Moses; if
>>> the latter, it's odd how it eventually frees itself.
>>>
>>> Is it possible to run scenario 2, then attach a debugger whilst
>>> Moses is in the low-CPU phase to see what it is doing? (You can
>>> do this in gdb with "info threads")
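>>>
>>> For example (a rough sketch; this assumes the server process is
>>> named mosesserver):
>>>
>>> gdb -p $(pgrep mosesserver)   # attach to the running server
>>> (gdb) info threads            # list all threads and their state
>>> (gdb) thread apply all bt     # backtrace of every thread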
>>>
>>> cheers - Barry
>>>
>>> On 24/07/15 12:07, Martin Baumgärtner wrote:
>>>> Hi,
>>>>
>>>> we have followed your discussion about the mosesserver performance
>>>> issue with much interest so far.
>>>>
>>>> We're seeing similar behaviour in our performance tests with a
>>>> current GitHub master clone. Both mosesserver and the complete
>>>> engine run on the same local machine, i.e. no NFS. The machine is
>>>> a virtualized CentOS 7 guest under Hyper-V:
>>>>
>>>> > lscpu
>>>>
>>>> Architecture: x86_64
>>>> CPU op-mode(s): 32-bit, 64-bit
>>>> Byte Order: Little Endian
>>>> CPU(s): 8
>>>> On-line CPU(s) list: 0-7
>>>> Thread(s) per core: 1
>>>> Core(s) per socket: 8
>>>> Socket(s): 1
>>>> NUMA node(s): 1
>>>> Vendor ID: GenuineIntel
>>>> CPU family: 6
>>>> Model: 30
>>>> Model name: Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz
>>>> Stepping: 5
>>>> CPU MHz: 2667.859
>>>> BogoMIPS: 5335.71
>>>> Hypervisor vendor: Microsoft
>>>> Virtualization type: full
>>>> L1d cache: 32K
>>>> L1i cache: 32K
>>>> L2 cache: 256K
>>>> L3 cache: 8192K
>>>>
>>>>
>>>> The following experiments use an engine with 75000 segments for
>>>> TM/LM (--minphr-memory, --minlexr-memory):
>>>>
>>>> 1.)
>>>> server: --threads: 8
>>>> client: shoots 8 threads => about 12 seconds, server shows full
>>>> CPU workload => OK
>>>>
>>>> 2.)
>>>> server: --threads: 8
>>>> client: shoots 10 threads => about 85 seconds, server shows
>>>> mostly low activity, full CPU workload only near end of process
>>>> => NOT OK
>>>>
>>>> 3.)
>>>> server: --threads: 16
>>>> client: shoots 10 threads => about 12 seconds, server shows
>>>> busy CPU workload => OK
>>>>
>>>> 4.)
>>>> server: --threads: 16
>>>> client: shoots 16 threads => about 11 seconds, server shows
>>>> busy CPU workload => OK
>>>>
>>>> 5.)
>>>> server: --threads: 16
>>>> client: shoots 20 threads => about 40-60 seconds (varies),
>>>> server shows mostly low activity, full CPU workload only near
>>>> end of process => NOT OK
>>>>
>>>>
>>>> We've consistently seen a performance breakdown whenever the number
>>>> of client threads exceeds the value given by the --threads parameter.
>>>>
>>>> Kind regards,
>>>> Martin
>>>>
>>>
>>
>

--

*STAR Group* <http://www.star-group.net/>

*Martin Baumgärtner*

STAR Language Technology & Solutions GmbH
Umberto-Nobile-Straße 19 | 71063 Sindelfingen | Germany
Tel. +49 70 31-4 10 92-0 martin.baumgaertner@star-group.net
Fax +49 70 31-4 10 92-70 www.star-group.net <http://www.star-group.net/>
Geschäftsführer: Oliver Rau, Bernd Barth
Handelsregister Stuttgart HRB 245654 | St.-Nr. 56098/11677


------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 106, Issue 10
**********************************************
