Moses-support Digest, Vol 119, Issue 33

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. Re: accessing the compact format of the phrase-table (Tom Hoar)
2. Re: accessing the compact format of the phrase-table
(Dimitar Shterionov)
3. Final CFP: 1st Workshop on Speech-Centric Natural Language
Processing (Nicholas Ruiz)


----------------------------------------------------------------------

Message: 1
Date: Fri, 23 Sep 2016 18:10:14 +0700
From: Tom Hoar <tahoar@pttools.net>
Subject: Re: [Moses-support] accessing the compact format of the
phrase-table
To: moses-support@mit.edu
Message-ID: <6e0f87f6-6d34-c9a9-f785-98fccd09e8f7@pttools.net>
Content-Type: text/plain; charset="windows-1252"

Sorry Dimitar,

I understand. I can't think of a way. If you have any of the output
stages of train-model.perl (assuming that's what you're using), you
could resume at a downstream step, for example `--do-steps 4-9`. Since
you're saving disk space, I wouldn't expect you saved things like the
GIZA word alignment files, etc. Rerunning train-model.perl from the
beginning might be your only way, assuming you saved the original corpus.
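For reference, resuming at the downstream steps might look like the sketch below. All paths and the de/en language pair are illustrative assumptions, not taken from this thread, and the command only works if the intermediate outputs of steps 1-3 (corpus preparation, GIZA++ runs, word alignment) are still on disk:

```shell
# Hedged sketch: resume train-model.perl at the phrase-extraction stage.
# Steps 1-3 are skipped, so their outputs must already exist under --root-dir.
# Every path below, and the de/en language pair, is an assumption.
train-model.perl \
    --root-dir /path/to/train \
    --corpus /path/to/corpus/clean \
    --f de --e en \
    --do-steps 4-9 \
    --external-bin-dir /path/to/giza-bin
```

If the alignment files were deleted to save space, only a full rerun from step 1 will regenerate the text phrase table.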

Best regards,

Tom Hoar
Chief Executive Officer
Precision Translation Tools Pte Ltd
Singapore/Thailand
Web: www.precisiontranslationtools.com
<http://www.precisiontranslationtools.com>
Thailand Mobile: +66 87 345-1875
Skype call: tahoar <skype:tahoar?call>
Skype chat: tahoar <skype:tahoar>


On 9/23/2016 4:45 PM, moses-support-request@mit.edu wrote:
> Date: Fri, 23 Sep 2016 10:44:57 +0100
> From: Dimitar Shterionov<dimitars@kantanmt.com>
> Subject: Re: [Moses-support] accessing the compact format of the
> phrase-table
> To: Tom Hoar<tahoar@pttools.net>
> Cc:moses-support@mit.edu
>
> Hello Tom,
>
> When I store the phrase-table for later use I store only the compact
> version - it's simply smaller.
>
> Is there another way to get the original text phrase-table without
> rebuilding?
>
> Cheers,
> Dimitar.
>
> Dimitar Shterionov |dimitars@kantanmt.com | Machine Translation Researcher
>
> www.KantanMT.com <http://www.kantanmt.com/> | Easy Translation - No
> Software. No Hardware. No Hassle MT.
>
> <https://www.facebook.com/KantanMT>
> <https://plus.google.com/+Kantanmt_cloudmachinetranslation>
> <https://twitter.com/KantanMT> <https://www.linkedin.com/company/kantanmt>
> <http://www.slideshare.net/kantanmt> <https://www.youtube.com/user/KantanMT>
> <http://kantanmtblog.com/> <https://kantanmt.com/rssfeeds.php>
> <info@kantanmt.com>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160923/257e6ea7/attachment-0001.html

------------------------------

Message: 2
Date: Fri, 23 Sep 2016 12:42:55 +0100
From: Dimitar Shterionov <dimitars@kantanmt.com>
Subject: Re: [Moses-support] accessing the compact format of the
phrase-table
To: Tom Hoar <tahoar@pttools.net>
Cc: moses-support@mit.edu
Message-ID:
<CALi0KgEOGuUDAeeiqnd7rJM0dOpDg91RTDWX-AbjiNeieE6t_w@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

Hello Tom,

Yes, I guess I will have to rebuild. I have the corpus so it will be ok...

Thanks a lot.

Cheers,
Dimitar.

Dimitar Shterionov | dimitars@kantanmt.com | Machine Translation Researcher

www.KantanMT.com <http://www.kantanmt.com/> | Easy Translation - No
Software. No Hardware. No Hassle MT.



On 23 September 2016 at 12:10, Tom Hoar <tahoar@pttools.net> wrote:

> Sorry Dimitar,
>
> I understand. I can't think of a way. If you have any of the output stages
> of train-model.perl (assuming that's what you're using), you could resume
> at a downstream step, for example `--do-steps 4-9`. Since you're saving
> disk space, I wouldn't expect you saved things like the GIZA word alignment
> files, etc. Rerunning train-model.perl from the beginning might be your
> only way, assuming you saved the original corpus.
> Best regards,
>
> Tom Hoar
> Chief Executive Officer
> *Precision Translation Tools Pte Ltd*
> Singapore/Thailand
> Web: www.precisiontranslationtools.com
> Thailand Mobile: +66 87 345-1875
> Skype call: tahoar
> Skype chat: tahoar
>
>
> On 9/23/2016 4:45 PM, moses-support-request@mit.edu wrote:
>
> Date: Fri, 23 Sep 2016 10:44:57 +0100
> From: Dimitar Shterionov <dimitars@kantanmt.com>
> Subject: Re: [Moses-support] accessing the compact format of the
> phrase-table
> To: Tom Hoar <tahoar@pttools.net>
> Cc: moses-support@mit.edu
>
> Hello Tom,
>
> When I store the phrase-table for later use I store only the compact
> version - it's simply smaller.
>
> Is there another way to get the original text phrase-table without
> rebuilding?
>
> Cheers,
> Dimitar.
>
> Dimitar Shterionov | dimitars@kantanmt.com | Machine Translation Researcher
> www.KantanMT.com <http://www.kantanmt.com/> | Easy Translation - No
> Software. No Hardware. No Hassle MT.
>
>
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160923/10c2b6f8/attachment-0001.html

------------------------------

Message: 3
Date: Fri, 23 Sep 2016 08:53:51 -0400
From: Nicholas Ruiz <nicruiz@fbk.eu>
Subject: [Moses-support] Final CFP: 1st Workshop on Speech-Centric
Natural Language Processing
To: moses-support@mit.edu
Message-ID:
<CAKa+0YM4bTVuCLikgkiz9H3z6=OfROKQUmGv7sBE1BpacPyfzQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

(apologies for cross-posts)

FINAL CALL FOR PAPERS: Extended Deadline 2nd October 2016
We are happy to introduce the 1st Workshop on Speech-Centric Natural
Language Processing (SCNLP), which will be held at COLING 2016 in Osaka,
Japan!

SCNLP's goal is to unite the ASR and NLP communities to discuss new
frameworks for exploiting the rich information present in the speech signal
to improve the capabilities of natural language processing applications
such as conversational agents, question-answering systems, machine
translation, and search. In addition to acoustic environment information,
the audio signal may contain speaker-specific features which may identify
the emotional state, demographic information, and the presence of
uncertainty in the speaker's utterance: features which may influence the
output of the NLP component. SCNLP encourages novel contributions that
revisit the conventional NLP problems with a focus on incorporating the
richness of spoken language, as well as contributions that promote
cross-fertilization between statistical methods for ASR and NLP.

We invite submissions of both long and short papers on original and
unpublished work. As in the main conference, submissions are limited
to 8 pages. All accepted submissions will be presented as posters.
Additionally, selected submissions will be presented orally.
Topics of interest include but are not limited to:

Joint ASR/NLP modeling using deep learning
Spoken query reformulation for question-answering systems
ASR error modeling and evaluation for NLP
Emotive speech synthesis for spoken dialogue systems
Word-sense disambiguation for speech
Information extraction from speech transcripts
Domain adaptation (adapting textual NLP training data to speech-centric
tasks)
Spoken language translation
Rich speech transcription
Disfluency and uncertainty detection
NLP with ASR lattices/confusion networks
Speech segmentation for NLP
Discourse and speech processing

All submissions should conform to COLING 2016 style guidelines, located
here:
http://coling2016.anlp.jp/#instructions
Long and short paper submissions must be anonymized. Please submit your
papers at https://www.softconf.com/coling2016/SCNLP/.

IMPORTANT DATES
October 2, 2016: Workshop papers due
October 23, 2016: Notification of acceptance
November 6, 2016: Camera-ready papers due
November 30, 2016: Official proceedings publication date
December 11, 2016: Workshop date

WORKSHOP ORGANIZERS
Nicholas Ruiz (Interactions, USA)
Srinivas Bangalore (Interactions, USA)

PROGRAM COMMITTEE
Loïc Barrault (Laboratoire d'Informatique de l'Université du Maine)
Frédéric Béchet (Aix Marseille Université)
Francisco Casacuberta (Universitat Politècnica de València)
Giuseppe di Fabbrizio (Amazon, USA)
Peter Heeman (Oregon Health & Science University, USA)
Julia Hirschberg (Columbia University, USA)
Tatsuya Kawahara (Kyoto University)
Gakuto Kurata (IBM Research, Tokyo)
Yang Liu (University of Texas at Dallas)
Yajie Miao (Carnegie Mellon University)
Alexandros Potamianos (National Technical University of Athens)
Giuseppe Riccardi (University of Trento)
Isabel Trancoso (L2F, Lisbon)
Jason Williams (Microsoft Research, USA)

WEBSITE
http://speechnlp.github.io/2016

CONTACT
scnlp {AT} interactions.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20160923/cb2873cc/attachment.html

------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 119, Issue 33
**********************************************
