Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: Performance issue using Moses Server with Moses 3
(probably same as Oren's) (Barry Haddow)
2. Re: Performance issue using Moses Server with Moses 3
(probably same as Oren's) (Martin Baumgärtner)
----------------------------------------------------------------------
Message: 1
Date: Fri, 24 Jul 2015 12:26:35 +0100
From: Barry Haddow <bhaddow@staffmail.ed.ac.uk>
Subject: Re: [Moses-support] Performance issue using Moses Server with
Moses 3 (probably same as Oren's)
To: Martin Baumgärtner <martin.baumgaertner@star-group.net>,
moses-support@mit.edu
Message-ID: <55B220EB.8020902@staffmail.ed.ac.uk>
Content-Type: text/plain; charset="windows-1252"
Hi Martin
Thanks for the detailed information. It's a bit strange since
command-line Moses uses the same threadpool, and we always overload the
threadpool since the entire test set is read in and queued.
The server was refactored somewhat recently - which git revision are you
using?
In the case where Moses takes a long time and CPU activity is low, it
could be waiting either on I/O or on locks. If the former, I don't know
why command-line Moses works fine; if the latter, it's odd that it
eventually frees itself.
Is it possible to run scenario 2, then attach a debugger whilst Moses is
in the low-CPU phase to see what it is doing? (You can do this in gdb
with "info threads")
cheers - Barry
On 24/07/15 12:07, Martin Baumgärtner wrote:
> Hi,
>
> We have followed your discussion about the mosesserver performance
> issue with much interest.
>
> We're seeing similar behaviour in our performance tests with a current
> GitHub master clone. Both mosesserver and the complete engine run on
> the same local machine, i.e. no NFS. The machine is a virtualized
> CentOS 7 under Hyper-V:
>
> > lscpu
>
> Architecture: x86_64
> CPU op-mode(s): 32-bit, 64-bit
> Byte Order: Little Endian
> CPU(s): 8
> On-line CPU(s) list: 0-7
> Thread(s) per core: 1
> Core(s) per socket: 8
> Socket(s): 1
> NUMA node(s): 1
> Vendor ID: GenuineIntel
> CPU family: 6
> Model: 30
> Model name: Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz
> Stepping: 5
> CPU MHz: 2667.859
> BogoMIPS: 5335.71
> Hypervisor vendor: Microsoft
> Virtualization type: full
> L1d cache: 32K
> L1i cache: 32K
> L2 cache: 256K
> L3 cache: 8192K
>
>
> The following experiments use an engine with 75000 segments for TM/LM
> (--minphr-memory, --minlexr-memory):
>
> 1.)
> server: --threads 8
> client: runs 8 threads => about 12 seconds, server shows full CPU
> workload => OK
>
> 2.)
> server: --threads 8
> client: runs 10 threads => about 85 seconds, server shows mostly low
> activity, full CPU workload only near the end of the run => NOT OK
>
> 3.)
> server: --threads 16
> client: runs 10 threads => about 12 seconds, server shows busy CPU
> workload => OK
>
> 4.)
> server: --threads 16
> client: runs 16 threads => about 11 seconds, server shows busy CPU
> workload => OK
>
> 5.)
> server: --threads 16
> client: runs 20 threads => about 40-60 seconds (varying), server
> shows mostly low activity, full CPU workload only near the end of the
> run => NOT OK
>
>
> We consistently see a breakdown in performance whenever the number of
> client threads exceeds the value given by the --threads parameter.
>
> Kind regards,
> Martin
>
> --
>
> *STAR Group* <http://www.star-group.net>
> <http://www.star-group.net/>
>
> *Martin Baumgärtner*
>
> STAR Language Technology & Solutions GmbH
> Umberto-Nobile-Straße 19 | 71063 Sindelfingen | Germany
> Tel. +49 70 31-4 10 92-0 martin.baumgaertner@star-group.net
> <mailto:martin.baumgaertner@star-group.net>
> Fax +49 70 31-4 10 92-70 www.star-group.net <http://www.star-group.net/>
> Managing Directors: Oliver Rau, Bernd Barth
> Commercial Register Stuttgart HRB 245654 | Tax No. 56098/11677
>
>
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
------------------------------
Message: 2
Date: Fri, 24 Jul 2015 15:17:54 +0200
From: Martin Baumgärtner <martin.baumgaertner@star-group.net>
Subject: Re: [Moses-support] Performance issue using Moses Server with
Moses 3 (probably same as Oren's)
To: Barry Haddow <bhaddow@staffmail.ed.ac.uk>, moses-support@mit.edu
Message-ID: <55B23B02.6040104@star-group.net>
Content-Type: text/plain; charset="windows-1252"
Hi Barry,
thanks for your quick reply!
We're currently testing on SHA e53ad4085942872f1c4ce75cb99afe66137e1e17
(master, from 2015-07-23). This version includes the fix for mosesserver
recently mentioned by Hieu in the performance thread.
Following my first intuition, I re-ran the critical experiments after
modifying mosesserver.cpp to simply double the given --threads value,
but only for the Abyss server: .maxConn((unsigned int)numThreads*2):
2.)
server: --threads 8 (i.e. abyss: 16)
client: runs 10 threads => about 11 seconds, server shows busy CPU
workload => OK

5.)
server: --threads 16 (i.e. abyss: 32)
client: runs 20 threads => about 11 seconds, server shows busy CPU
workload => OK
Helps. :-)
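[Editorial note: the stall pattern in this thread - throughput collapsing once client threads exceed the server's connection limit - can be sketched with a toy Python model. This is purely illustrative, not Moses code; in the toy the cap is a worker pool, while in the real server it appears to be the Abyss connection cap (maxConn).]

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor


def serve(server_slots: int, client_threads: int, work_s: float = 0.02) -> int:
    """Toy model: the server has `server_slots` workers; each client
    request occupies one worker for `work_s` seconds. Returns the peak
    number of requests that were in flight at the same time."""
    lock = threading.Lock()
    in_flight = 0
    peak = 0

    def handle(_request):
        nonlocal in_flight, peak
        with lock:
            in_flight += 1
            peak = max(peak, in_flight)
        time.sleep(work_s)  # simulate translation work
        with lock:
            in_flight -= 1

    # max_workers caps concurrency, like the server-side thread limit.
    with ThreadPoolExecutor(max_workers=server_slots) as pool:
        list(pool.map(handle, range(client_threads)))
    return peak


# No more than `server_slots` requests are ever in flight, so any
# extra clients can only sit and wait for a slot to free up.
print(serve(8, 8), serve(8, 10))
```

This is why raising the connection cap above --threads, as in the patch above, lets surplus requests at least be accepted and queued instead of stalling.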
Best wishes,
Martin
--
*STAR Group* <http://www.star-group.net>
<http://www.star-group.net/>
*Martin Baumgärtner*
STAR Language Technology & Solutions GmbH
Umberto-Nobile-Straße 19 | 71063 Sindelfingen | Germany
Tel. +49 70 31-4 10 92-0 martin.baumgaertner@star-group.net
<mailto:martin.baumgaertner@star-group.net>
Fax +49 70 31-4 10 92-70 www.star-group.net <http://www.star-group.net/>
Managing Directors: Oliver Rau, Bernd Barth
Commercial Register Stuttgart HRB 245654 | Tax No. 56098/11677
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 105, Issue 53
**********************************************