Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. run giza: died with signal 9, without coredump (Khetam sh)
2. Re: run giza: died with signal 9, without coredump (Hieu Hoang)
3. Moses on SGE clarification (Vincent Nguyen)
----------------------------------------------------------------------
Message: 1
Date: Wed, 28 Oct 2015 11:52:00 +0000
From: Khetam sh <khetam.alsharou@hotmail.com>
Subject: [Moses-support] run giza: died with signal 9, without
coredump
To: moses support new <moses-support-request@mit.edu>, moses owner
<moses-support-owner@mit.edu>, moses <moses-support@mit.edu>
Message-ID: <SNT149-W3CB8E297C9598E0B73F14E0210@phx.gbl>
Content-Type: text/plain; charset="windows-1256"
While running Moses, a problem occurred with the run-giza step during training. I opened the file
TRAINING_run-giza.1.STDERR
It shows that
ERROR: Execution of: /home/user/workspace/bin/training-tools/snt2cooc /home/user/workspace/experiment/test2/training/giza.1/en-ar.cooc /home/user/workspace/experiment/test2/training/prepared.1/ar.vcb /home/user/workspace/experiment/test2/training/prepared.1/en.vcb /home/user/workspace/experiment/test2/training/prepared.1/en-ar-int-train.snt
died with signal 9, without coredump
What does that mean? Will this affect running Moses? FYI, when I type jobs, it shows that the system is still running.
Cheers,
Khetam
------------------------------
Message: 2
Date: Wed, 28 Oct 2015 12:19:41 +0000
From: Hieu Hoang <hieuhoang@gmail.com>
Subject: Re: [Moses-support] run giza: died with signal 9, without
coredump
To: Khetam sh <khetam.alsharou@hotmail.com>, moses-support
<moses-support@mit.edu>
Message-ID:
<CAEKMkbgJMH76p=pVk-TYwc807rdmQ_mjdU+MY5OJyKz5bFb89w@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
You need to run the mgiza steps again. To do that, delete
steps/1/TRAINING_run-giza*
then restart the EMS with
nohup .../experiment.perl -continue=1 -exec &
NOT
nohup experiment.perl -config=config.basic -exec &
Hieu Hoang
http://www.hoang.co.uk/hieu
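The recovery steps above can be sketched as a short command sequence. This is a sketch under assumptions: it is run from the experiment's working directory (the one containing steps/), the failed run is run number 1, and MOSES_SCRIPTS is a placeholder for your own moses/scripts path (the original message elides it as "...").

```
# Assumed placeholder -- substitute your own moses/scripts path:
MOSES_SCRIPTS=/path/to/mosesdecoder/scripts

# 0. Keep the line from the earlier reply in your EMS config file:
#      training-options = "... -snt2cooc snt2cooc.pl"

# 1. Delete the markers of the failed step so EMS will re-run it:
rm -f steps/1/TRAINING_run-giza*

# 2. Restart the SAME run, continuing from the completed steps.
#    Note -continue=1 (run number 1), NOT -config=config.basic,
#    which would start a fresh run instead of resuming this one:
nohup $MOSES_SCRIPTS/ems/experiment.perl -continue=1 -exec &
```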
On 28 October 2015 at 12:16, Khetam sh <khetam.alsharou@hotmail.com> wrote:
> After I add this line to my EMS config file, do I need to rerun the
> system?
>
> ------------------------------
> Date: Wed, 28 Oct 2015 12:05:35 +0000
> Subject: Re: run giza: died with signal 9, without coredump
> From: hieuhoang@gmail.com
> To: khetam.alsharou@hotmail.com
> CC: moses-support@mit.edu
>
>
> Your computer doesn't have enough memory to run snt2cooc, which is a
> component of mgiza/giza++.
>
> If you are using mgiza, add the following to your EMS config file
> training-options = "... -snt2cooc snt2cooc.pl"
>
> Hieu Hoang
> http://www.hoang.co.uk/hieu
>
> On 28 October 2015 at 11:52, Khetam sh <khetam.alsharou@hotmail.com>
> wrote:
>
> While running Moses, a problem occurred with the run-giza step during training.
> I opened the file
> TRAINING_run-giza.1.STDERR
> It shows that
> ERROR: Execution of: /home/user/workspace/bin/training-tools/snt2cooc
> /home/user/workspace/experiment/test2/training/giza.1/en-ar.cooc
> /home/user/workspace/experiment/test2/training/prepared.1/ar.vcb
> /home/user/workspace/experiment/test2/training/prepared.1/en.vcb
> /home/user/workspace/experiment/test2/training/prepared.1/en-ar-int-train.snt
>
> died with signal 9, without coredump
>
> What does that mean? Will this affect running Moses? FYI, when I type
> jobs, it shows that the system is still running.
>
> Cheers,
> Khetam
>
>
>
------------------------------
Message: 3
Date: Wed, 28 Oct 2015 15:20:17 +0100
From: Vincent Nguyen <vnguyen@neuf.fr>
Subject: [Moses-support] Moses on SGE clarification
To: moses-support <moses-support@mit.edu>
Message-ID: <5630D9A1.3010606@neuf.fr>
Content-Type: text/plain; charset=utf-8; format=flowed
Hi there,
I need some clarification before I screw up some files.
I just set up an SGE cluster with a master + 2 nodes.
To make it clear, let's say my cluster name is "default", my master
headnode is "master", and my 2 other nodes are "node1" and "node2".
For EMS:
I opened the default experiment.machines file and I see:
cluster: townhill seville hermes lion seville sannox lutzow frontend
multicore-4: freddie
multicore-8: tyr thor odin crom
multicore-16: saxnot vali vili freyja bragi hoenir
multicore-24: syn hel skaol saga buri loki sif magni
multicore-32: gna snotra lofn thrud
What are townhill and the others? Names of machines/nodes? Names of
several clusters?
Should I just put "default" or "master node1 node2"?
For multicore-X: should I put machine names here? If my 3 machines
have 8 cores each, then
multicore-8: master node1 node2
right?
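For a setup like the one described (three 8-core hosts), the whole experiment.machines entry could be as small as the sketch below. Assumptions: the hostnames master, node1, and node2 are the ones posited above, and each name must match what `hostname` reports on that box, since EMS matches the current machine's name against these lists.

```
multicore-8: master node1 node2
```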
then in the config file for EMS:
#generic-parallelizer =
$moses-script-dir/ems/support/generic-parallelizer.perl
#generic-parallelizer =
$moses-script-dir/ems/support/generic-multicore-parallelizer.perl
Which one should I take if my nodes are multicore? Still the first one?
### cluster settings (if run on a cluster machine)
# number of jobs to be submitted in parallel
#
#jobs = 10
Should I count approximately 1 job per core across the total cores of my 3 machines?
# arguments to qsub when scheduling a job
#qsub-settings = ""
Can this stay empty?
# project for priviledges and usage accounting
#qsub-project = iccs_smt
Is there a standard value?
# memory and time
#qsub-memory = 4
#qsub-hours = 48
4 what? GB?
### multi-core settings
# when the generic parallelizer is used, the number of cores
# specified here
cores = 4
Is this ignored if generic-parallelizer.perl is chosen?
Is there a way to put more load on one specific node?
Many thanks,
V.
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 108, Issue 76
**********************************************