Send Moses-support mailing list submissions to
moses-support@mit.edu
To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu
You can reach the person managing the list at
moses-support-owner@mit.edu
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."
Today's Topics:
1. Re: --activate-features in mert-moses.perl not working?
(Rico Sennrich)
2. Deep neural networks for statistical machine translation:
several positions available (Postdoc, PhD, engineers, etc)
(Holger Schwenk)
3. Re: --activate-features in mert-moses.perl not working?
(Hieu Hoang)
4. COLING 2014 Update - Tutorials & Workshops Announced, Paper
Submission System Open (John Judge)
----------------------------------------------------------------------
Message: 1
Date: Mon, 10 Feb 2014 20:15:53 +0000 (UTC)
From: Rico Sennrich <rico.sennrich@gmx.ch>
Subject: Re: [Moses-support] --activate-features in mert-moses.perl
not working?
To: moses-support@mit.edu
Message-ID: <loom.20140210T210947-642@post.gmane.org>
Content-Type: text/plain; charset=us-ascii
Marcin Junczys-Dowmunt <junczys@...> writes:
>
> Hi,
> it seems --activate-features=STRING is not working in mert-moses.perl.
> The script prints a message that the ignored features are not being
> used, but then optimizes them anyway. I can see that the "enabled"
> information in the feature data structure is not being used anywhere in
> the script once it has been set (apart from printing the message).
I don't know too much about the --activate-features option myself, but in
recent Moses versions, you can add the option 'tuneable=false' to a feature
function in the config. The effect is that the feature score(s) won't be
reported to the n-best list, and MERT/MIRA/PRO won't even know that the
feature exists. The weight from the original config will be used for all
tuning iterations, and copied to the final config. You can now also specify
the weight of sparse features in the config, and this will override the
weight set in the weights file.
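As a minimal sketch, the config would look something like the following
(the sparse feature name is a made-up placeholder, not an actual Moses
feature):

```ini
[feature]
# the Distortion score is still computed, but the optimizer never sees it
Distortion tuneable=false

[weight]
# this weight is used unchanged in every tuning iteration
# and copied to the final config
Distortion0= 0.3
# a sparse feature weight set here overrides the weights file
SomeSparseFeature_example= 0.5
```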
best wishes,
Rico
------------------------------
Message: 2
Date: Mon, 10 Feb 2014 22:08:09 +0100
From: Holger Schwenk <holger.schwenk@lium.univ-lemans.fr>
Subject: [Moses-support] Deep neural networks for statistical machine
translation: several positions available (Postdoc, PhD, engineers,
etc)
To: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID: <52F93FB9.8050107@lium.univ-lemans.fr>
Content-Type: text/plain; charset="iso-8859-1"
In recent years, there have been several breakthroughs in the use
of neural networks for natural language processing, in particular using
deep architectures.
The computer science laboratory of the University of Le Mans (LIUM) has
been working on statistical machine translation (SMT) for many years, and
we were among the first researchers to successfully use neural networks,
for instance for continuous space language and translation models.
We want to substantially increase our research efforts in this area,
hoping to achieve significant advances in SMT. Our goal is to build
state-of-the-art large-scale SMT systems using deep neural networks.
In this major research effort, we have openings at different levels
- postdoc positions
- PhD positions
- short term visits with a well focused research project
- engineers
The candidates are expected to have demonstrated knowledge in at least
one of the following fields:
- neural networks (feed-forward, recurrent NN, deep learning, etc).
- statistical machine translation
- efficient implementation of machine learning algorithms (GPU, MPI, etc)
The positions are immediately available. Applications are accepted until
the positions are filled. Initial appointment is for one year,
renewable for up to three years. Competitive salaries are available,
including health care and other social benefits, travel support, etc.
The working language is English or French.
LIUM is participating in several international projects, financed by the
European Commission, DARPA and the French government. We collaborate
with leading research groups in USA and Europe.
A large computer cluster is available to support the research (700 CPU
cores with a total of 6 TBytes of memory and more than 250 TBytes of
RAID disk space). We also own a cluster with 30 Tesla K20 and K40 GPU
cards, connected by a fast Infiniband network.
Le Mans is located between Paris and the Atlantic Ocean; both can be
reached in about one hour by high-speed train. The Loire valley, with its
many wineries and other attractions, is just a short drive away.
Applications should include a CV, a list of publications and the names
of two references. We will invite interesting candidates for further
discussions.
For more information, please contact Holger Schwenk by email:
Holger.Schwenk@lium.univ-lemans.fr
------------------------------
Message: 3
Date: Mon, 10 Feb 2014 21:56:57 +0000
From: Hieu Hoang <hieuhoang@gmail.com>
Subject: Re: [Moses-support] --activate-features in mert-moses.perl
not working?
To: moses-support@mit.edu, Marcin Junczys-Dowmunt <junczys@amu.edu.pl>
Message-ID: <52F94B29.1050302@gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
You CAN remove Distortion from the ini file; there will then be no
distortion score. There may still be reordering, which is controlled by
the [distortion-limit] section.
Or you can do what Rico suggested:
[feature]
Distortion tuneable=false
[weight]
Distortion0= 0
fyi - the only obligatory feature functions are:
1. UnknownWordPenalty
2. InputFeature (for lattices/confusion networks)
It may be a good idea not to hardcode these in the future.
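For what it's worth, the fix Marcin is asking about, actually honouring
the "enabled" flag rather than just printing a message, amounts to
partitioning the features before the optimizer sees them. A sketch of
that logic (in Python rather than the script's Perl, with illustrative
field names, not the real mert-moses.perl data structure):

```python
# Sketch: keep a deactivated feature's weight fixed instead of
# letting the optimizer tune it. Field names are illustrative.

def split_features(features):
    """Partition features into those the optimizer may tune and
    those whose initial weights are passed through unchanged."""
    tuned = {}
    fixed = {}
    for name, info in features.items():
        if info.get("enabled", True):
            tuned[name] = info["weight"]
        else:
            fixed[name] = info["weight"]  # pass through to final config
    return tuned, fixed

features = {
    "LM0": {"weight": 0.5, "enabled": True},
    "Distortion0": {"weight": 0.3, "enabled": False},
}
tuned, fixed = split_features(features)
print(tuned)  # {'LM0': 0.5}
print(fixed)  # {'Distortion0': 0.3}
```

Only the "tuned" set would be written to the optimizer's input; the
"fixed" set would go straight into each iteration's config.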
On 10/02/2014 19:59, Marcin Junczys-Dowmunt wrote:
> On 10.02.2014 20:46, Barry Haddow wrote:
> Ah, by the way, is removing the Distortion feature from the ini file and
> setting the limit to 1 a safe way to actually disable distortion? Moses
> does not complain (I always thought it was required.)
> Best,
> Marcin
>
>> Hi Marcin
>>
>> I had some fun with --activate-features in the past - I think the
>> syntax was rather strange. If it is not working now, it may have got
>> dropped by the recent refactoring
>>
>> My advice would be to use kbmira (or pro), since they are regularised
>> they don't go crazy when there is an uninformative feature. That way,
>> you don't have to fiddle with feature activation,
>>
>> cheers - Barry
>>
>> On 10/02/14 18:01, Marcin Junczys-Dowmunt wrote:
>>> Hi,
>>> it seems --activate-features=STRING is not working in mert-moses.perl.
>>> The script prints a message that the ignored features are not being
>>> used, but then optimizes them anyway. I can see that the "enabled"
>>> information in the feature data structure is not being used anywhere in
>>> the script once it has been set (apart from printing the message).
>>>
>>> This can cause an interesting catastrophe when, for instance, distortion
>>> is disabled by setting the limit to 1:
>>> MERT assigns a weight of 1 to distortion (but the feature itself is
>>> always 0) and 0 weights to all other features, the final score is then
>>> equal to 0 for all sentences and poor moses goes crazy generating lots
>>> of garbage which in turn takes ages to score only to finish with bad
>>> weights. Really ugly, took me a while to find the cause :)
>>>
>>> BTW, in my opinion a --deactivate-features option might be more useful.
>>> I would add/correct it myself, but currently I am getting lost in the
>>> code that prints the config files. Is someone more acquainted with that
>>> code?
>>> Best,
>>> Marcin
>>> _______________________________________________
>>> Moses-support mailing list
>>> Moses-support@mit.edu
>>> http://mailman.mit.edu/mailman/listinfo/moses-support
>>>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
------------------------------
Message: 4
Date: Tue, 11 Feb 2014 12:25:34 +0000
From: John Judge <jjudge@computing.dcu.ie>
Subject: [Moses-support] COLING 2014 Update - Tutorials & Workshops
Announced, Paper Submission System Open
To: John Judge <jjudge@computing.dcu.ie>
Message-ID: <52FA16BE.5000801@computing.dcu.ie>
Content-Type: text/plain; charset=windows-1252; format=flowed
********** Apologies for cross-posting **********
COLING 2014
Dublin, Ireland, 23-29 August, 2014
COLING 2014 (http://www.coling-2014.org), the 25th International
Conference on Computational Linguistics, will be organised by CNGL
(Centre for Global Intelligent Content) at the Helix Convention Centre
at Dublin City University (DCU) from 23-29 August 2014. The COLING
conference is organised under the auspices of the International
Committee on Computational Linguistics (ICCL).
Accepted Tutorials and Workshops
We are pleased to announce that the workshops and tutorials for COLING
2014 have been finalised. In total there will be 6 tutorials and 18
workshops covering a wide range of topics. You will be able to register
for tutorials and workshops when the main conference registration opens.
Accepted Tutorials:
- Dependency Parsing: Past, Present, and Future - Wenliang Chen,
Zhenghua Li and Min Zhang
- Using Neural Networks for Modeling and Representing Natural Languages -
Tomas Mikolov
- Multilingual Word Sense Disambiguation and Entity Linking - Roberto
Navigli and Andrea Moro
- Selection Bias, Label Bias, and Bias in Ground Truth - Anders Søgaard,
Barbara Plank and Dirk Hovy
- Automated Grammatical Error Correction for Language Learners - Joel
Tetreault and Claudia Leacock
- Biomedical/clinical NLP - Ozlem Uzuner and Meliha Yetisgen
Accepted Workshops:
- Workshop on Lexical and Grammatical Resources for Language Processing
- First Joint Workshop on Statistical Parsing of Morphologically Rich
Languages and Syntactic Analysis of Non-Canonical Languages (SPMRL-SANCL)
- Celtic Language Technology Workshop (CLTW)
- 5th Workshop on South and Southeast Asian Natural Language Processing
(WSSANLP)
- Synchronic and Diachronic Approaches to Analyzing Technical Language
(SADAATL)
- The 3rd Workshop on Vision and Language (VL'14)
- First Joint Workshop on Multidisciplinary Approaches to Big Social
Data Analysis and Social NLP (MABSDA-SocialNLP)
- First Joint Workshop on Open Infrastructures for HLT Resource
Processing and Development and Text Analysis Frameworks
- Cognitive Aspects of the Lexicon (CogALex-IV)
- Semantic Web and Information Extraction (SWAIE)
- VarDial: Workshop on Applying NLP Tools to Similar Languages,
Varieties and Dialects (VarDial)
- SemEval-2014: Semantic Evaluation Exercises - International Workshop
on Semantic Evaluation (SemEval)
- The AHA!-Workshop on Information Discovery in Text (AHA!)
- CompuTerm 2014: 4th International Workshop on Computational
Terminology (CompuTerm)
- The First Workshop on Computational Approaches to Compound Analysis
(ComAComA)
- Automatic Text Simplification - Methods and Applications in the
Multilingual Society (ATS-MA)
- The 8th Linguistic Annotation Workshop (LAW VIII)
- The Third Joint Conference on Lexical and Computational Semantics
(*SEM-2014)
Call for Papers:
The first call for papers for the main conference is open, and the paper
submission system is now accepting submissions at
https://www.softconf.com/coling2014/main/. Papers should be prepared
using the templates available at
http://www.coling-2014.org/doc/coling2014.zip.
We invite authors to submit papers in all relevant areas and encourage
authors to include analysis of the influence of theories (intuitions,
methodologies, insights) on technologies (computational algorithms,
methods, tools, data), and/or of the contributions of technologies to
theory development. Contributions that display and rigorously discuss
future potential, even if not (yet) attested in standard evaluation, are
welcome. For more information and instructions for authors, see the
preliminary call for papers at
http://www.coling-2014.org/call-for-papers.php.
Sponsorship and Promotion Opportunities
With the tremendous impact and growth of COLING over many years, we have
set out a comprehensive offering of corporate sponsorship and support
opportunities for industry participants. The COLING 2014 organising
committee is committed to working in close partnership with corporate
supporters to maximise the benefits of sponsorship and provide
opportunities for sponsors to engage appropriately with COLING
delegates. With this in mind, if you wish to explore additional options
for support or means of engagement with the COLING 2014 community please
feel free to contact sponsorship@coling-2014.org.
--
John Judge
Research Fellow
CNGL - The Centre for Global Intelligent Content
COLING 2014 Local Chair
Email: jjudge@computing.dcu.ie
Phone: +353 1 700 6729
Skype: jjudge2
http://www.cngl.ie
http://www.meta-net.eu
http://www.coling-2014.org
------------------------------
_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support
End of Moses-support Digest, Vol 88, Issue 20
*********************************************