Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: IRSTLM installation (Matthias Huck)
2. Re: kbmira died with SIGABRT when tuning (Dingyuan Wang)
3. Re: Skip OOV when computing Language Model score
(LUONG NGOC Quang)
----------------------------------------------------------------------
Message: 1
Date: Mon, 18 Jan 2016 15:17:04 +0000
From: Matthias Huck <mhuck@inf.ed.ac.uk>
Subject: Re: [Moses-support] IRSTLM installation
To: Ouafa Benterki <obenterki@gmail.com>
Cc: moses-support@mit.edu
Message-ID: <1453130224.27582.107.camel@portedgar>
Content-Type: text/plain; charset="UTF-8"
Hi,
Have you tried to use an absolute path?
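For example (the path below is only a hypothetical location; point it at
wherever IRSTLM is actually installed, with no spaces in the path):

./bjam --with-irstlm=/home/username/irstlm -j4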
Cheers,
Matthias
On Mon, 2016-01-18 at 02:52 +0100, Ouafa Benterki wrote:
> Hello,
>
> I installed IRSTLM, but when I used the command
> ./bjam --with-irstlm=/path to irstlm/ the installation failed.
> Can you advise?
>
> Best
--
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
------------------------------
Message: 2
Date: Mon, 18 Jan 2016 23:32:29 +0800
From: Dingyuan Wang <abcdoyle888@gmail.com>
Subject: Re: [Moses-support] kbmira died with SIGABRT when tuning
To: Barry Haddow <bhaddow@inf.ed.ac.uk>, Hieu Hoang
<hieuhoang@gmail.com>
Cc: moses-support <moses-support@mit.edu>
Message-ID: <569D058D.3090800@gmail.com>
Content-Type: text/plain; charset=utf-8
Hi Barry,
I've checked all the models and corpora with the script, without finding
any encoding problems.
I also find that all such errors in the nbest list occur only in the
feature list (3 different samples), without affecting the translation
result. Therefore, the phrase table or training corpus may not be the
problem.
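(For reference, a minimal sketch of the kind of check the script performs,
assuming it simply reports lines that fail to decode as UTF-8; the actual
script linked below may differ in detail:)

import sys

def find_bad_lines(path):
    # Read raw bytes and report line numbers that are not valid UTF-8.
    with open(path, 'rb') as f:
        for lineno, raw in enumerate(f, 1):
            try:
                raw.decode('utf-8')
            except UnicodeDecodeError as err:
                print('%s:%d: %s' % (path, lineno, err))

if __name__ == '__main__':
    for name in sys.argv[1:]:
        find_bad_lines(name)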
On 2016-01-18 23:04, Barry Haddow wrote:
> Hi Dingyuan
>
> Are these encoding errors present in your phrase table? Are they present
> in your training corpus? Since they appear in the word translation
> features, and you are using a shortlist, are they in the shortlist files
> in the model directory? (These have names with "topn" in them afaik).
>
> File-system errors are unlikely, and for the most part Moses treats text
> as byte strings so encoding errors usually trace back to the source text.
>
> cheers - Barry
>
> On 18/01/16 14:56, Dingyuan Wang wrote:
>> Hi Barry,
>>
>> "The ones starting with the "@"" are due to corrupted bytes in the nbest
>> list.
>>
>> This kind of corruption occurs from time to time. I wonder if it comes
>> from memory errors, filesystem failures, or some kind of
>> pointer/encoding problem in Moses.
>>
>> I've written a script to find such corrupted lines:
>>
>> https://gist.github.com/gumblex/0d9d0848b435e4f9818f
>>
>> On 2016-01-18 20:42, Barry Haddow wrote:
>>> Hi Dingyuan
>>>
>>> The extractor expects feature names to contain an underscore (not sure
>>> exactly why) but some of yours don't, and Moses skips them, interpreting
>>> their values as extra dense features.
>>>
>>> The attached screenshot shows my view of the offending names. The ones
>>> starting with the "@" are the problem. So it does look like the nbest
>>> list is corrupted. Can you run the decoder on just that sentence, to
>>> create an uncompressed version of the nbest list?
>>>
>>> cheers - Barry
>>>
>>> On 18/01/16 12:02, Dingyuan Wang wrote:
>>>> Hi Barry,
>>>>
>>>> Attached is the zgrep result.
>>>> I found that in the middle of line 61 a few bytes are corrupted. Is
>>>> that a Moses problem, or does my memory have a problem?
>>>>
>>>> I also checked the other files using iconv; they are all valid UTF-8.
>>>>
>>>> On 2016-01-18 19:32, Barry Haddow wrote:
>>>>> Hi Dingyuan
>>>>>
>>>>> Yes, that's very possible. The error could be in extracting
>>>>> features.dat
>>>>> from the nbest list. Are you able to post the nbest list? Or at least
>>>>> the entries for sentence 16?
>>>>>
>>>>> Run something like
>>>>>
>>>>> zgrep "^16 " tuning/tmp.1/run7.best100.out.gz
>>>>>
>>>>> cheers - Barry
>>>>>
>>>>> On 18/01/16 11:24, Dingyuan Wang wrote:
>>>>>> Hi Barry,
>>>>>>
>>>>>> I have rerun the EMS after the first email and then posted the
>>>>>> recent results, so the line changed.
>>>>>>
>>>>>> I just use the latest code and the EMS script, with pretty much
>>>>>> default settings. The EMS setting is:
>>>>>>
>>>>>> sparse-features = "target-word-insertion top 50, source-word-deletion
>>>>>> top 50, word-translation top 50 50, phrase-length"
>>>>>>
>>>>>> I suspect there is something unexpected in the extractor.
>>>>>>
>>>>>>
>>>>>> On 2016-01-18 19:03, Barry Haddow wrote:
>>>>>>> Hi Dingyuan
>>>>>>>
>>>>>>> In fact it is neither the sparse features nor the Asian characters
>>>>>>> that are the problem. The offending line has 17 dense features, yet
>>>>>>> your model has 14 dense features.
>>>>>>>
>>>>>>> The string "1 1 1" appears directly after the language model
>>>>>>> feature in
>>>>>>> line 1694, in your attachment, adding the extra 3 features. Note
>>>>>>> that
>>>>>>> this is not the line you mentioned in your earlier email.
>>>>>>>
>>>>>>> I have no idea why there are extra features. Have you made
>>>>>>> changes to
>>>>>>> any of the core Moses features?
>>>>>>>
>>>>>>> best wishes
>>>>>>> Barry
>>>>>>>
>>>>>>> The offending line:
>>>>>>> what(): Error in line "-5.44027 0 0 -5.34901 0 0 0 -224.872 1 1
>>>>>>> 1 -39
>>>>>>> 18 -26.2331 -40.6736 -44.3698 -82.5072 WT_?~?=3 WT_?~?=1
>>>>>>> WT_?~?=1
>>>>>>> WT_?~?=1 WT_?~?=1 PL_s3=5 PL_3,2=2 PL_3,3=3 PL_2,3=4 PL_t3=7
>>>>>>> PL_s1=5
>>>>>>> PL_1,2=2 PL_1,1=3 PL_t1=4 PL_2,2=3 PL_t2=7 PL_s2=8 PL_2,1=1 WT_
>>>>>>> ?~?=1
>>>>>>> WT_?~?=1 WT_?~?=1 WT_?~?=1 WT_?~?=1 WT_?~?=1 WT_?~?=1
>>>>>>> WT_?~
>>>>>>> ?=1 WT_??~?=1 WT_??~?=1 WT_?~?=1 WT_?~?=1 WT_?~??=1
>>>>>>> WT_?~
>>>>>>> ?=1 WT_?~?=1 WT_?~??=1 WT_?~??=1 WT_?~?=1 WT_?~?=1 WT_
>>>>>>> ?~?
>>>>>>> ?=1 WT_?~?=1 WT_?~??=1 WT_?~?=1 WT_?~??=1 WT_?~??=1
>>>>>>> WT_?
>>>>>>> ?~??=1 WT_?~??=1 WT_?~?=1 WT_?~?=1 WT_?~?=1 WT_?~?
>>>>>>> ?=1 WT_
>>>>>>> ?~??=1 WT_??~??=1 " of ...
>>>>>>>
>>>>>>>
>>>>>>> On 18/01/16 10:37, Dingyuan Wang wrote:
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I've attached that. The line number is 1694.
>>>>>>>>
>>>>>>>> On 2016-01-18 16:43, Barry Haddow wrote:
>>>>>>>>> Hi Dingyuan
>>>>>>>>>
>>>>>>>>> Is it possible to attach the features.dat file that is causing the
>>>>>>>>> error? Almost certainly Moses is failing to parse the line
>>>>>>>>> because of
>>>>>>>>> the Asian characters in the feature names,
>>>>>>>>>
>>>>>>>>> cheers - Barry
>>>>>>>>>
>>>>>>>>> On 16/01/16 15:58, Dingyuan Wang wrote:
>>>>>>>>>> I ran
>>>>>>>>>>
>>>>>>>>>> ~/software/moses/bin/kbmira -J 75 --dense-init run7.dense
>>>>>>>>>> --sparse-init
>>>>>>>>>> run7.sparse-weights --ffile run1.features.dat --ffile
>>>>>>>>>> run2.features.dat
>>>>>>>>>> --ffile run3.features.dat --ffile run4.features.dat --ffile
>>>>>>>>>> run5.features.dat --ffile run6.features.dat --ffile
>>>>>>>>>> run7.features.dat
>>>>>>>>>> --scfile run1.scores.dat --scfile run2.scores.dat --scfile
>>>>>>>>>> run3.scores.dat --scfile run4.scores.dat --scfile run5.scores.dat
>>>>>>>>>> --scfile run6.scores.dat --scfile run7.scores.dat -o
>>>>>>>>>> /tmp/mert.out
>>>>>>>>>>
>>>>>>>>>> in the tuning/tmp.1 directory, which will certainly replicate the
>>>>>>>>>> error.
>>>>>>>>>>
>>>>>>>>>> On 2016-01-16 23:42, Hieu Hoang wrote:
>>>>>>>>>>> The mert script prints out every command it runs. You should be
>>>>>>>>>>> able to
>>>>>>>>>>> replicate the error by running the last command
>>>>>>>>>>>
>>>>>>>>>>> On 16 Jan 2016 14:18, "Dingyuan Wang" <abcdoyle888@gmail.com
>>>>>>>>>>> <mailto:abcdoyle888@gmail.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Sorry, but I can't reliably replicate the same problem
>>>>>>>>>>> when
>>>>>>>>>>> running
>>>>>>>>>>> TUNING_tune.1 alone. There is no character '_' in
>>>>>>>>>>> the test
>>>>>>>>>>> set
>>>>>>>>>>> or top50
>>>>>>>>>>> list.
>>>>>>>>>>>
>>>>>>>>>>> I'm using sparse-features = "target-word-insertion
>>>>>>>>>>> top 50,
>>>>>>>>>>> source-word-deletion top 50, word-translation top 50
>>>>>>>>>>> 50,
>>>>>>>>>>> phrase-length"
>>>>>>>>>>>
>>>>>>>>>>> I've attached some related files from EMS and the EMS
>>>>>>>>>>> config.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> https://mega.nz/#!xs0SFKxL!M_RTBp1JGX24-b4xlYYLP-bLXKiC_Sl-p96x55avAB4
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 2016-01-16 02:45, Hieu Hoang wrote:
>>>>>>>>>>> > Could you make your model files available for download so I
>>>>>>>>>>> > can replicate this problem?
>>>>>>>>>>> >
>>>>>>>>>>> > It seems like you're using a feature function with sparse
>>>>>>>>>>> > scores. I think the character '_' must be escaped.
>>>>>>>>>>> >
>>>>>>>>>>> >
>>>>>>>>>>> > On 12/01/16 04:00, Dingyuan Wang wrote:
>>>>>>>>>>> >> Hi all,
>>>>>>>>>>> >>
>>>>>>>>>>> >> I'm using EMS for my experiments. Every time, kbmira died
>>>>>>>>>>> >> with SIGABRT when tuning in one direction, while tuning in
>>>>>>>>>>> >> the opposite direction (same config and test set) was
>>>>>>>>>>> >> successful.
>>>>>>>>>>> >>
>>>>>>>>>>> >> The mert.log (stderr) shows the following:
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >> kbmira with c=0.01 decay=0.999 no_shuffle=0
>>>>>>>>>>> >> Initialising random seed from system clock
>>>>>>>>>>> >> Found 15323 initial sparse features
>>>>>>>>>>> >> ....terminate called after throwing an instance of
>>>>>>>>>>> >> 'MosesTuning::FileFormatException'
>>>>>>>>>>> >> what(): Error in line "-4.51933 0 0 -6.09733
>>>>>>>>>>> 0 0 0
>>>>>>>>>>> -121.556 2
>>>>>>>>>>> -20 12
>>>>>>>>>>> >> -31.6201 -38.5211 -26.5112 -60.6166 WT_?~?=2
>>>>>>>>>>> WT_?~?=1
>>>>>>>>>>> PL_s1=4
>>>>>>>>>>> >> PL_s3=1 PL_3,3=1 PL_2,2=3 PL_1,2=1 PL_2,1=3 PL_t1=6
>>>>>>>>>>> PL_t2=4
>>>>>>>>>>> PL_t3=2
>>>>>>>>>>> >> PL_2,3=1 PL_s2=7 PL_1,1=3 WT_?~??=1 WT_?~??=1
>>>>>>>>>>> WT_?~
>>>>>>>>>>> ?=1
>>>>>>>>>>> WT_?~?
>>>>>>>>>>> >> ?=1 WT_?~?=1 WT_?~?=2 WT_?~?=1 WT_?~?=1
>>>>>>>>>>> WT_?~
>>>>>>>>>>> ??=1
>>>>>>>>>>> WT_
>>>>>>>>>>> ?~?=1
>>>>>>>>>>> >> WT_?~??=1 WT_?~?=1 WT_?~??=1 WT_?~??=1
>>>>>>>>>>> WT_?~?
>>>>>>>>>>> ?=1 WT_?~
>>>>>>>>>>> >> ?=1 WT_?~??=1 " of run7.features.dat
>>>>>>>>>>> >> Aborted
>>>>>>>>>>> >>
>>>>>>>>>>> >>
>>>>>>>>>>> >> I think since run7.scores.dat is generated by the scripts, I
>>>>>>>>>>> >> shouldn't be the one responsible for the bad format. Last
>>>>>>>>>>> >> time it also died; I removed the likely offending line in the
>>>>>>>>>>> >> test set, but this time another line appears.
>>>>>>>>>>> >>
>>>>>>>>>>> >> --
>>>>>>>>>>> >> Dingyuan Wang
>>>>>>>>>>> >> _______________________________________________
>>>>>>>>>>> >> Moses-support mailing list
>>>>>>>>>>> >> Moses-support@mit.edu <mailto:Moses-support@mit.edu>
>>>>>>>>>>> >>
>>>>>>>>>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>>>>>>>>>> >
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Dingyuan Wang (gumblex)
>>>>>>>>>>>
>>>
>
>
--
Dingyuan Wang
------------------------------
Message: 3
Date: Mon, 18 Jan 2016 16:55:37 +0100
From: LUONG NGOC Quang <quangngocluong@gmail.com>
Subject: Re: [Moses-support] Skip OOV when computing Language Model
score
To: Ergun Bicici <ergun.bicici@dfki.de>
Cc: moses-support <moses-support@mit.edu>
Message-ID:
<CAD5_8Y=jb5qshCzccvcAfi_k1oqKAWkGJxsZQbu8a8WtP-HVkw@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Dear All,
Thank you, all of you, for your contributions.
Actually, I am using an LM trained on data that is not identical to the
target side of the phrase table (with a much more limited vocabulary, for
my own purposes), so I don't think the -drop-unknown option would help.
As Jie also emphasized, my objective is to jump one word further each time
<unk> is encountered, and so on. That would match "house <unk> in" to
"house in" in the LM without doing anything else (e.g. backoff). This is
not exactly what the oov-feature=1 setting can do!
I also observed that the -skipoovs option assigns zero probability to all
n-grams containing an OOV and therefore does not count them in the overall
sentence LM score.
So far I am more convinced that modifying the code is the way to accomplish
my goal, although it is not straightforward for me at present.
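For illustration, here is a minimal offline sketch of the behaviour I am
after (assuming the kenlm Python bindings and a hypothetical lm.arpa, just
for illustration; it simply drops <unk> tokens before scoring, which is not
the in-decoder modification discussed above):

import kenlm

model = kenlm.Model('lm.arpa')  # hypothetical LM file

def skip_oov_score(sentence):
    # Drop <unk> tokens so that "the <unk> house <unk> in" is scored
    # as "the house in", instead of backing off around each <unk>.
    kept = [w for w in sentence.split() if w != '<unk>']
    return model.score(' '.join(kept), bos=True, eos=True)

print(skip_oov_score('the <unk> house <unk> in'))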
Best,
Quang
On Fri, Jan 15, 2016 at 4:41 PM, Ergun Bicici <ergun.bicici@dfki.de> wrote:
>
> No comment.
>
>
>
> *Best Regards,*
> Ergun
>
> Ergun Biçici
> DFKI Projektbüro Berlin
>
>
> On Fri, Jan 15, 2016 at 4:20 PM, Jie Jiang <mail.jie.jiang@gmail.com>
> wrote:
>
>> Hi Ergun:
>>
>> I think the -skipoovs option would just drop all the n-gram scores that
>> have an OOV in them, rather than using a skip-ngram LM model.
>>
>> An easy way to test it is to run it with that option to calculate the log
>> prob of a sentence with an OOV; it should result in a rather high score.
>>
>> Please correct me if I'm wrong...
>>
>> 2016-01-15 14:07 GMT+00:00 Ergun Bicici <ergun.bicici@dfki.de>:
>>
>>>
>>> Dear Jie,
>>>
>>> There may be some option from SRILM:
>>> - http://www.speech.sri.com/pipermail/srilm-user/2013q2/001509.html
>>> - http://www.speech.sri.com/projects/srilm/manpages/ngram.1.html:
>>> * -skipoovs*
>>> Instruct the LM to skip over contexts that contain out-of-vocabulary
>>> words, instead of using a backoff strategy in these cases.
>>>
>>> If it is not there, maybe that is for a reason...
>>>
>>> Bing appears to be fast at indexing this thread:
>>> http://comments.gmane.org/gmane.comp.nlp.moses.user/14570
>>>
>>>
>>> *Best Regards,*
>>> Ergun
>>>
>>> Ergun Biçici
>>> DFKI Projektbüro Berlin
>>>
>>>
>>> On Fri, Jan 15, 2016 at 2:37 PM, Jie Jiang <mail.jie.jiang@gmail.com>
>>> wrote:
>>>
>>>> Hi Ergun:
>>>>
>>>> The original request in Quang's post was:
>>>>
>>>> *For instance, with the n-gram: "the <unk> house <unk> in", I would
>>>> like the decoder to assign it the probability of the phrase: "the house in"
>>>> (existing in the LM).*
>>>>
>>>> so each time there is a <unk> when calculating the LM score, you need
>>>> to look another word further.
>>>>
>>>> I believe that this cannot be achieved with current LM tools without
>>>> modifying the source code, which has already been clarified by Kenneth.
>>>>
>>>>
>>>> 2016-01-15 13:20 GMT+00:00 Ergun Bicici <ergun.bicici@dfki.de>:
>>>>
>>>>>
>>>>> Dear Kenneth,
>>>>>
>>>>> In the Moses manual, the -drop-unknown switch is mentioned:
>>>>>
>>>>> 4.7.2
>>>>> Handling Unknown Words
>>>>> Unknown words are copied verbatim to the output. They are also scored
>>>>> by the language
>>>>> model, and may be placed out of order. Alternatively, you may want to
>>>>> drop unknown words.
>>>>> To do so add the switch -drop-unknown.
>>>>>
>>>>> Alternatively, you can write a script that replaces all OOV tokens
>>>>> with some OOV-token-identifier such as <unk> before sending for
>>>>> translation.
>>>>>
>>>>>
>>>>> *Best Regards,*
>>>>> Ergun
>>>>>
>>>>> Ergun Biçici
>>>>> DFKI Projektbüro Berlin
>>>>>
>>>>>
>>>>> On Fri, Jan 15, 2016 at 12:22 AM, Kenneth Heafield <
>>>>> moses@kheafield.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I think oov-feature=1 just activates the OOV count feature while
>>>>>> leaving the LM score unchanged. So it would still include
>>>>>> p(<unk> | in).
>>>>>>
>>>>>> One might try setting the OOV feature weight to -weight_LM *
>>>>>> weird_moses_internal_constant * log p(<unk>) in an attempt to cancel
>>>>>> out
>>>>>> the log p(<unk>) terms. However that won't work either because:
>>>>>>
>>>>>> 1) It will still charge backoff penalties, b(the)b(house) in the
>>>>>> example.
>>>>>>
>>>>>> 2) The context will be lost each time so it's p(house) not p(house |
>>>>>> the).
>>>>>>
>>>>>> If the <unk>s follow a pattern, such as appearing every other word,
>>>>>> one
>>>>>> could insert them into the ARPA file though that would waste memory.
>>>>>>
>>>>>> I don't think there's any way to accomplish exactly what OP asked for
>>>>>> without coding (though it wouldn't be that hard once one understands
>>>>>> how
>>>>>> the LM infrastructure works).
>>>>>>
>>>>>> Kenneth
>>>>>>
>>>>>> On 01/14/2016 11:07 PM, Philipp Koehn wrote:
>>>>>> > Hi,
>>>>>> >
>>>>>> > You may get the behavior you want by adding
>>>>>> > "oov-feature=1"
>>>>>> > to your LM specification line in moses.ini
>>>>>> > and also add a second weight with value "0" to the corresponding LM
>>>>>> > weight setting.
>>>>>> >
>>>>>> > This will then only use the scores
>>>>>> > p(the|<s>)
>>>>>> > p(house|<s>,the,<unk>) ---> backoff to p(house)
>>>>>> > p(in|<s>,the,<unk>,house,<unk>) ---> backoff to p(in)
>>>>>> >
>>>>>> > -phi
>>>>>> >
>>>>>> > On Thu, Jan 14, 2016 at 8:25 AM, LUONG NGOC Quang
>>>>>> > <quangngocluong@gmail.com <mailto:quangngocluong@gmail.com>> wrote:
>>>>>> >
>>>>>> > Dear All,
>>>>>> >
>>>>>> > I am currently using a SRILM Language Model (LM) in my Moses
>>>>>> > decoder. Does anyone know how I can ask the decoder, at
>>>>>> > decoding time, to skip all out-of-vocabulary words when
>>>>>> > computing the LM score (instead of doing back-off)?
>>>>>> >
>>>>>> > For instance, with the n-gram: "the <unk> house <unk> in", I
>>>>>> would
>>>>>> > like the decoder to assign it the probability of the phrase:
>>>>>> "the
>>>>>> > house in" (existing in the LM).
>>>>>> >
>>>>>> > Do I need more options/declarations in moses.ini file?
>>>>>> >
>>>>>> > Any help is very much appreciated,
>>>>>> >
>>>>>> > Best,
>>>>>> > Quang
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > _______________________________________________
>>>>>> > Moses-support mailing list
>>>>>> > Moses-support@mit.edu <mailto:Moses-support@mit.edu>
>>>>>> > http://mailman.mit.edu/mailman/listinfo/moses-support
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> >
>>>>>> > _______________________________________________
>>>>>> > Moses-support mailing list
>>>>>> > Moses-support@mit.edu
>>>>>> > http://mailman.mit.edu/mailman/listinfo/moses-support
>>>>>> >
>>>>>> _______________________________________________
>>>>>> Moses-support mailing list
>>>>>> Moses-support@mit.edu
>>>>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Moses-support mailing list
>>>>> Moses-support@mit.edu
>>>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Best regards!
>>>>
>>>> Jie Jiang
>>>>
>>>>
>>>
>>
>>
>> --
>>
>> Best regards!
>>
>> Jie Jiang
>>
>>
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
--
Luong Ngoc Quang
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160118/61ec6462/attachment.html
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 111, Issue 55
**********************************************