Moses-support Digest, Vol 89, Issue 25

Send Moses-support mailing list submissions to
moses-support@mit.edu

To subscribe or unsubscribe via the World Wide Web, visit
http://mailman.mit.edu/mailman/listinfo/moses-support
or, via email, send a message with subject or body 'help' to
moses-support-request@mit.edu

You can reach the person managing the list at
moses-support-owner@mit.edu

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Moses-support digest..."


Today's Topics:

1. Re: Problem while running mert-moses.pl (Valters Sics)
2. Re: Multiple reference for tuning (Hieu Hoang)


----------------------------------------------------------------------

Message: 1
Date: Wed, 12 Mar 2014 09:57:17 +0200
From: Valters Sics <Valters.Sics@tilde.lv>
Subject: Re: [Moses-support] Problem while running mert-moses.pl
To: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID:
<AC6FD4BB9BB02540AC7322091A6C3B54731A5819A6@postal.Tilde.lv>
Content-Type: text/plain; charset="us-ascii"

Hi Allan,

The Moses decoder is being killed while loading the phrase table (exit code 137 means the process received SIGKILL, typically from the kernel's out-of-memory killer), so you probably do not have enough RAM to load it.
If that's the case, consider binarizing the language model and the phrase table; see the advanced features section on the Moses web page. A rough sketch of those steps is below.
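
A minimal sketch of the binarization steps, assuming a KenLM ARPA language model and a gzipped phrase table. The paths are illustrative, and moses.ini then has to point at the binary files (e.g. PhraseDictionaryBinary instead of PhraseDictionaryMemory, and the binary LM path for KENLM):

MOSES=/home/wei_lu/tools/mosesdecoder

# Optional: confirm the kernel's OOM killer was responsible
dmesg | grep -i 'out of memory'

# 1. Binarize the KenLM language model (loads faster and can be memory-mapped)
$MOSES/bin/build_binary \
    /home/wei_lu/experiment/lm/allan_lm.lm \
    /home/wei_lu/experiment/lm/allan_lm.binlm

# 2. Binarize the phrase table so it is read from disk instead of held in RAM
#    (input path is a guess; use the phrase table referenced by your moses.ini;
#     -nscores matches num-features=6 from the log above, adjust if different)
export LC_ALL=C
zcat /home/wei_lu/pku_seg/model/phrase-table.gz | sort | \
    $MOSES/bin/processPhraseTable -ttable 0 0 - -nscores 6 \
    -out /home/wei_lu/pku_seg/model/phrase-table.bin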

Best,
Valters

From: moses-support-bounces@mit.edu [mailto:moses-support-bounces@mit.edu] On Behalf Of Allan Jie
Sent: Wednesday, March 12, 2014 7:34 AM
To: moses-support@mit.edu
Subject: [Moses-support] Problem while running mert-moses.pl

Dear all,

I am trying to run tuning with this command:
nohup nice /home/wei_lu/tools/mosesdecoder/scripts/training/mert-moses.pl \
/home/wei_lu/experiment/data/dev/dev2010.lowercased.indexed.zh /home/wei_lu/experiment/data/dev/dev2010.lowercased.en \
/home/wei_lu/tools/mosesdecoder/bin/moses /home/wei_lu/pku_seg/model/moses.ini --working-dir /home/wei_lu/pku_seg/mert-work-index --mertdir /home/wei_lu/tools/mosesdecoder/bin/ -decoder-flags="-threads 4"

But the run fails; here is the log:

Using SCRIPTS_ROOTDIR: /home/wei_lu/tools/mosesdecoder/scripts
filtering the phrase tables... Wed Mar 12 12:18:43 SGT 2014
exec: /home/wei_lu/tools/mosesdecoder/scripts/training/filter-model-given-input.pl ./filtered /home/wei_lu/pku_seg/model/moses.ini /home/wei_lu/experiment/data/dev/dev2010.lowercased.indexed.zh
Executing: /home/wei_lu/tools/mosesdecoder/scripts/training/filter-model-given-input.pl ./filtered /home/wei_lu/pku_seg/model/moses.ini /home/wei_lu/experiment/data/dev/dev2010.lowercased.indexed.zh > filterphrases.out 2> filterphrases.err
Asking moses for feature names and values from filtered/moses.ini
Executing: /home/wei_lu/tools/mosesdecoder/bin/moses -threads 4 -config filtered/moses.ini -inputtype 0 -show-weights > ./features.list
Defined parameters (per moses.ini or switch):
config: filtered/moses.ini
distortion-limit: 6
feature: UnknownWordPenalty WordPenalty PhrasePenalty PhraseDictionaryMemory name=TranslationModel0 table-limit=20 num-features=6 path=/home/wei_lu/pku_seg/mert-work-index/filtered/phrase-table.0-0.1.1.gz input-factor=0 output-factor=0 LexicalReordering name=LexicalReordering0 num-features=6 type=wbe-msd-bidirectional-fe-allff input-factor=0 output-factor=0 path=/home/wei_lu/pku_seg/mert-work-index/filtered/reordering-table.wbe-msd-bidirectional-fe Distortion KENLM lazyken=0 name=LM0 factor=0 path=/home/wei_lu/experiment/lm/allan_lm.lm order=3
input-factors: 0
inputtype: 0
mapping: 0 T 0
show-weights:
threads: 4
weight: UnknownWordPenalty0= 1 WordPenalty0= -1 PhrasePenalty0= 0.2 TranslationModel0= 0.2 0.2 0.2 0.2 0.2 0.2 LexicalReordering0= 0.3 0.3 0.3 0.3 0.3 0.3 Distortion0= 0.3 LM0= 0.5
/home/wei_lu/tools/mosesdecoder/bin
line=UnknownWordPenalty
FeatureFunction: UnknownWordPenalty0 start: 0 end: 0
line=WordPenalty
FeatureFunction: WordPenalty0 start: 1 end: 1
line=PhrasePenalty
FeatureFunction: PhrasePenalty0 start: 2 end: 2
line=PhraseDictionaryMemory name=TranslationModel0 table-limit=20 num-features=6 path=/home/wei_lu/pku_seg/mert-work-index/filtered/phrase-table.0-0.1.1.gz input-factor=0 output-factor=0
FeatureFunction: TranslationModel0 start: 3 end: 8
line=LexicalReordering name=LexicalReordering0 num-features=6 type=wbe-msd-bidirectional-fe-allff input-factor=0 output-factor=0 path=/home/wei_lu/pku_seg/mert-work-index/filtered/reordering-table.wbe-msd-bidirectional-fe
FeatureFunction: LexicalReordering0 start: 9 end: 14
Initializing LexicalReordering..
line=Distortion
FeatureFunction: Distortion0 start: 15 end: 15
line=KENLM lazyken=0 name=LM0 factor=0 path=/home/wei_lu/experiment/lm/allan_lm.lm order=3
FeatureFunction: LM0 start: 16 end: 16
Loading the LM will be faster if you build a binary file.
Reading /home/wei_lu/experiment/lm/allan_lm.lm
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****************************************************************************************************
Loading table into memory...done.
Start loading text SCFG phrase table. Moses format : [1507.000] seconds
Reading /home/wei_lu/pku_seg/mert-work-index/filtered/phrase-table.0-0.1.1.gz
----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100
****sh: line 1: 49997 Killed /home/wei_lu/tools/mosesdecoder/bin/moses -threads 4 -config filtered/moses.ini -inputtype 0 -show-weights > ./features.list
Exit code: 137
Failed to run moses with the config filtered/moses.ini at /home/wei_lu/tools/mosesdecoder/scripts/training/mert-moses.pl line 1271.

I have no idea why this happened.
When I try a small dataset (only one sentence in the tuning set and the corresponding phrase table), it works fine.

I don't know what the problem is.

Best Regards,
Allan

--
Research student, final-year undergraduate
Extreme Scale Network Computing and Service Laboratory (http://www.cloud-uestc.cn/cloud-uestc-EN/index.html),
School of Computer Science & Engineering,
University of Electronic Science and Technology of China (http://www.oice.uestc.edu.cn/en/).
Website: http://www.allanjie.ml/
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20140312/7e7f0caa/attachment-0001.htm

------------------------------

Message: 2
Date: Wed, 12 Mar 2014 12:35:05 +0000
From: Hieu Hoang <Hieu.Hoang@ed.ac.uk>
Subject: Re: [Moses-support] Multiple reference for tuning
To: WANG Lingxiao <wanglingxiao0216@gmail.com>
Cc: "moses-support@mit.edu" <moses-support@mit.edu>
Message-ID:
<CAEKMkbhb9zT4NksA3Z-KYtiuXVSKE8CqrBs1ey-ReQw0h7cO3g@mail.gmail.com>
Content-Type: text/plain; charset="iso-8859-1"

I'm not sure, but it might work if you put all your references on one line:

raw-reference = "$wmt12-data/dev/reference1 $wmt12-data/dev/reference2 $wmt12-data/dev/reference3"
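
If EMS still does not pick the files up, a fallback is to call mert-moses.pl directly; as far as I know, the script treats its reference argument as a prefix when numbered files (reference0, reference1, ...) exist next to it. A rough, untested sketch with illustrative paths (the tokenized input file name is a guess, and the hierarchical setup may need extra flags):

/Users/lingxiaowang/mosesdecoder/scripts/training/mert-moses.pl \
    /Users/lingxiaowang/experiment/tuning/input.tok.1 \
    /Users/lingxiaowang/experiment/data/dev/reference \
    /Users/lingxiaowang/mosesdecoder/bin/moses_chart \
    /Users/lingxiaowang/experiment/model/moses.ini \
    --working-dir /Users/lingxiaowang/experiment/tuning/mert \
    --mertdir /Users/lingxiaowang/mosesdecoder/bin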



On 3 March 2014 21:57, WANG Lingxiao <wanglingxiao0216@gmail.com> wrote:

> Hi,
>
> I use EMS to train Moses, and I want to specify multiple references for
> mert-moses.pl.
> I found some info on the page
> http://www.statmt.org/moses/?n=FactoredTraining.Tuning.
> I named 4 files (reference0 reference1 reference2 reference3),
> and my config.hierarchical sets raw-reference = $wmt12-data/dev/reference,
> but EMS cannot find the references.
>
> Thanks for your help and best regards,
>
> Lingxiao WANG
>
>
>
>
> This is my log :
>
> -------------- TUNING_tokenize-reference.1.DONE --------------
> #!/bin/bash
>
>
> PATH="/opt/local/bin:/opt/local/sbin:/opt/subversion/bin:/Users/lingxiaowang/bin:/Users/lingxiaowang/srilm/bin/macosx:/Users/lingxiaowang/srilm/bin:/usr/local/srilm/bin/macox:/usr/local/srilm/bin:/opt/local/bin:/opt/local/sbin:/opt/local/libexec/gnubin:/opt/local/bin:/opt/local/sbin:/opt/subversion/bin:/Users/lingxiaowang/bin:/bin/macosx:/bin:/usr/local/srilm/bin/macox:/usr/local/srilm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/texbin"
> cd /Users/lingxiaowang/experiment
> echo 'starting at '`date`' on '`hostname`
> mkdir -p /Users/lingxiaowang/experiment/tuning
>
> mkdir -p /Users/lingxiaowang/experiment/tuning
> /Users/lingxiaowang/mosesdecoder/scripts/tokenizer/tokenizer.perl -a -l en
> < /Users/lingxiaowang/experiment/data/dev/reference >
> /Users/lingxiaowang/experiment/tuning/reference.tok.1
>
>
> echo 'finished at '`date`
> touch
> /Users/lingxiaowang/experiment/steps/1/TUNING_tokenize-reference.1.DONE
>
> -------------- TUNING_tokenize-reference.1.STDERR --------------
>
> /Users/lingxiaowang/experiment/steps/1/TUNING_tokenize-reference.1: line
> 9: /Users/lingxiaowang/experiment/data/dev/reference: No such file or
> directory
>
>
> -------------- TUNING_tokenize-reference.1.INFO --------------
>
> TUNING:raw-reference = /Users/lingxiaowang/experiment/data/dev/reference
> INPUT = USED /Users/lingxiaowang/experiment/data/dev/reference
> output-tokenizer =
> /Users/lingxiaowang/mosesdecoder/scripts/tokenizer/tokenizer.perl -a -l en [
> 1390307282]
>
>
> -------------- config.hierarchical --------------
>
> ################################################
> ### CONFIGURATION FILE FOR AN SMT EXPERIMENT ###
> ################################################
>
> [GENERAL]
>
> ### directory in which experiment is run
> #
> working-dir = /Users/lingxiaowang/experiment
>
> # specification of the language pair
> input-extension = zh
> output-extension = en
> pair-extension = zh-en
>
> ### directories that contain tools and data
> #
> # moses
> moses-src-dir = /Users/lingxiaowang/mosesdecoder
> #
> # moses binaries
> moses-bin-dir = $moses-src-dir/bin
> #
> # moses scripts
> moses-script-dir = $moses-src-dir/scripts
> #
> # directory where GIZA++/MGIZA programs resides
> external-bin-dir = /Users/lingxiaowang/mosesdecoder/tools
> #
> # srilm
> #srilm-dir = $moses-src-dir/srilm/bin/i686
> #
> # irstlm
> irstlm-dir = $moses-src-dir/irstlm/bin
> #
> # randlm
> #randlm-dir = $moses-src-dir/randlm/bin
> #
> # data
> wmt12-data = $working-dir/data
>
> ### basic tools
> #
> # moses decoder
> decoder = $moses-bin-dir/moses_chart
>
> # conversion of phrase table into binary on-disk format
> #ttable-binarizer = $moses-bin-dir/processPhraseTable
>
> # conversion of rule table into binary on-disk format
> ttable-binarizer = "$moses-bin-dir/CreateOnDiskPt 1 1 4 100 2"
>
> # tokenizers - comment out if all your data is already tokenized
> input-tokenizer = "$moses-script-dir/tokenizer/tokenizer.perl -a -l $input-extension"
> output-tokenizer = "$moses-script-dir/tokenizer/tokenizer.perl -a -l $output-extension"
>
> # truecasers - comment out if you do not use the truecaser
> input-truecaser = $moses-script-dir/recaser/truecase.perl
> output-truecaser = $moses-script-dir/recaser/truecase.perl
> detruecaser = $moses-script-dir/recaser/detruecase.perl
>
> ### generic parallelizer for cluster and multi-core machines
> # you may specify a script that allows the parallel execution of
> # parallelizable steps (see meta file). you also need to specify
> # the number of jobs (cluster) or cores (multicore)
> #
> #generic-parallelizer = $moses-script-dir/ems/support/generic-parallelizer.perl
> #generic-parallelizer = $moses-script-dir/ems/support/generic-multicore-parallelizer.perl
>
> ### cluster settings (if run on a cluster machine)
> # number of jobs to be submitted in parallel
> #
> #jobs = 10
>
> # arguments to qsub when scheduling a job
> #qsub-settings = ""
>
> # project for privileges and usage accounting
> #qsub-project = iccs_smt
>
> # memory and time
> #qsub-memory = 4
> #qsub-hours = 48
>
> ### multi-core settings
> # when the generic parallelizer is used, the number of cores
> # specified here
> cores = 8
>
> #################################################################
> # PARALLEL CORPUS PREPARATION:
> # create a tokenized, sentence-aligned corpus, ready for training
>
> [CORPUS]
>
> ### long sentences are filtered out, since they slow down GIZA++
> # and are a less reliable source of data. set here the maximum
> # length of a sentence
> #
> max-sentence-length = 80
>
> [CORPUS:europarl] IGNORE
>
> ### command to run to get raw corpus files
> #
> # get-corpus-script =
>
> ### raw corpus files (untokenized, but sentence aligned)
> # raw-stem = $wmt12-data/training/ze.$pair-extension
>
> ### tokenized corpus files (may contain long sentences)
> #
> #tokenized-stem =
>
> ### if sentence filtering should be skipped,
> # point to the clean training data
> #
> #clean-stem =
>
> ### if corpus preparation should be skipped,
> # point to the prepared training data
> #
> #lowercased-stem =
>
> [CORPUS:ZE]
> raw-stem = $wmt12-data/training/ze.$pair-extension
>
>
>
> #################################################################
> # LANGUAGE MODEL TRAINING
>
> [LM]
>
> ### tool to be used for language model training
> # srilm
> #lm-training = $srilm-dir/ngram-count
> #settings = "-interpolate -kndiscount -unk"
>
> # irstlm training
> # msb = modified kneser ney; p=0 no singleton pruning
> lm-training = "$moses-script-dir/generic/trainlm-irst2.perl -cores $cores -irst-dir $irstlm-dir -temp-dir $working-dir/tmp"
> settings = "-s msb -p 0"
>
> # order of the language model
> order = 5
>
> ### tool to be used for training randomized language model from scratch
> # (more commonly, a SRILM is trained)
> #
> #rlm-training = "$randlm-dir/buildlm -falsepos 8 -values 8"
>
> ### script to use for binary table format for irstlm or kenlm
> # (default: no binarization)
>
> # irstlm
> #lm-binarizer = $irstlm-dir/compile-lm
>
> # kenlm, also set type to 8
> #lm-binarizer = $moses-bin-dir/build_binary
> #type = 8
>
> ### script to create quantized language model format (irstlm)
> # (default: no quantization)
> #
> #lm-quantizer = $irstlm-dir/quantize-lm
>
> ### script to use for converting into randomized table format
> # (default: no randomization)
> #
> #lm-randomizer = "$randlm-dir/buildlm -falsepos 8 -values 8"
>
> ### each language model to be used has its own section here
>
> [LM:europarl] IGNORE
>
> ### command to run to get raw corpus files
> #
> #get-corpus-script = ""
>
> ### raw corpus (untokenized)
> #
> raw-corpus = $wmt12-data/training/europarl-v7.$output-extension
>
> ### tokenized corpus files (may contain long sentences)
> #
> #tokenized-corpus =
>
> ### if corpus preparation should be skipped,
> # point to the prepared language model
> #
> #lm =
>
> [LM:ZE]
> raw-corpus = $wmt12-data/training/ze.$pair-extension.$output-extension
>
>
>
>
> #################################################################
> # INTERPOLATING LANGUAGE MODELS
>
> [INTERPOLATED-LM] IGNORE
>
> # if multiple language models are used, these may be combined
> # by optimizing perplexity on a tuning set
> # see, for instance [Koehn and Schwenk, IJCNLP 2008]
>
> ### script to interpolate language models
> # if commented out, no interpolation is performed
> #
> script = $moses-script-dir/ems/support/interpolate-lm.perl
>
> ### tuning set
> # you may use the same set that is used for mert tuning (reference set)
> #
> tuning-sgm = $wmt12-data/dev/newstest2010-ref.$output-extension.sgm
> #raw-tuning =
> #tokenized-tuning =
> #factored-tuning =
> #lowercased-tuning =
> #split-tuning =
>
> ### group language models for hierarchical interpolation
> # (flat interpolation is limited to 10 language models)
> #group = "first,second fourth,fifth"
>
> ### script to use for binary table format for irstlm or kenlm
> # (default: no binarization)
>
> # irstlm
> #lm-binarizer = $irstlm-dir/compile-lm
>
> # kenlm, also set type to 8
> lm-binarizer = $moses-bin-dir/build_binary
> type = 8
>
> ### script to create quantized language model format (irstlm)
> # (default: no quantization)
> #
> #lm-quantizer = $irstlm-dir/quantize-lm
>
> ### script to use for converting into randomized table format
> # (default: no randomization)
> #
> #lm-randomizer = "$randlm-dir/buildlm -falsepos 8 -values 8"
>
> #################################################################
> # MODIFIED MOORE LEWIS FILTERING
>
> [MML] IGNORE
>
> ### specifications for language models to be trained
> #
> #lm-training = $srilm-dir/ngram-count
> #lm-settings = "-interpolate -kndiscount -unk"
> #lm-binarizer = $moses-src-dir/bin/build_binary
> #lm-query = $moses-src-dir/bin/query
> #order = 5
>
> ### in-/out-of-domain source/target corpora to train the 4 language models
> #
> # in-domain: point either to a parallel corpus
> #outdomain-stem = [CORPUS:toy:clean-split-stem]
>
> # ... or to two separate monolingual corpora
> #indomain-target = [LM:toy:lowercased-corpus]
> #raw-indomain-source = $toy-data/nc-5k.$input-extension
>
> # point to out-of-domain parallel corpus
> #outdomain-stem = [CORPUS:giga:clean-split-stem]
>
> # settings: number of lines sampled from the corpora to train each language model on
> # (if used at all, should be small as a percentage of corpus)
> #settings = "--line-count 100000"
>
> #################################################################
> # TRANSLATION MODEL TRAINING
>
> [TRAINING]
>
> ### training script to be used: either a legacy script or
> # current moses training script (default)
> #
> script = $moses-script-dir/training/train-model.perl
>
> ### general options
> # these are options that are passed on to train-model.perl, for instance
> # * "-mgiza -mgiza-cpus 8" to use mgiza instead of giza
> # * "-sort-buffer-size 8G -sort-compress gzip" to reduce on-disk sorting
> # * "-sort-parallel 8 -cores 8" to speed up phrase table building
> #
> training-options = "-mgiza -mgiza-cpus 8"
>
> ### factored training: specify here which factors are used
> # if none specified, single factor training is assumed
> # (one translation step, surface to surface)
> #
> #input-factors = word lemma pos morph
> #output-factors = word lemma pos
> #alignment-factors = "word -> word"
> #translation-factors = "word -> word"
> #reordering-factors = "word -> word"
> #generation-factors = "word -> pos"
> #decoding-steps = "t0, g0"
>
> ### parallelization of data preparation step
> # the two directions of the data preparation can be run in parallel
> # comment out if not needed
> #
> parallel = yes
>
> ### pre-computation for giza++
> # giza++ has a more efficient data structure that needs to be
> # initialized with snt2cooc. if run in parallel, this may reduce
> # memory requirements. set here the number of parts
> #
> #run-giza-in-parts = 5
>
> ### symmetrization method to obtain word alignments from giza output
> # (commonly used: grow-diag-final-and)
> #
> alignment-symmetrization-method = grow-diag-final-and
>
> ### use of Chris Dyer's fast align for word alignment
> #
> #fast-align-settings = "-d -o -v"
>
> ### use of berkeley aligner for word alignment
> #
> #use-berkeley = true
> #alignment-symmetrization-method = berkeley
> #berkeley-train = $moses-script-dir/ems/support/berkeley-train.sh
> #berkeley-process = $moses-script-dir/ems/support/berkeley-process.sh
> #berkeley-jar = /your/path/to/berkeleyaligner-1.1/berkeleyaligner.jar
> #berkeley-java-options = "-server -mx30000m -ea"
> #berkeley-training-options = "-Main.iters 5 5 -EMWordAligner.numThreads 8"
> #berkeley-process-options = "-EMWordAligner.numThreads 8"
> #berkeley-posterior = 0.5
>
> ### use of baseline alignment model (incremental training)
> #
> #baseline = 68
> #baseline-alignment-model = "$working-dir/training/prepared.$baseline/$input-extension.vcb \
> #    $working-dir/training/prepared.$baseline/$output-extension.vcb \
> #    $working-dir/training/giza.$baseline/${output-extension}-$input-extension.cooc \
> #    $working-dir/training/giza-inverse.$baseline/${input-extension}-$output-extension.cooc \
> #    $working-dir/training/giza.$baseline/${output-extension}-$input-extension.thmm.5 \
> #    $working-dir/training/giza.$baseline/${output-extension}-$input-extension.hhmm.5 \
> #    $working-dir/training/giza-inverse.$baseline/${input-extension}-$output-extension.thmm.5 \
> #    $working-dir/training/giza-inverse.$baseline/${input-extension}-$output-extension.hhmm.5"
>
> ### if word alignment should be skipped,
> # point to word alignment files
> #
> #word-alignment = $working-dir/model/aligned.1
>
> ### filtering some corpora with modified Moore-Lewis
> # specify corpora to be filtered and ratio to be kept, either before or after word alignment
> #mml-filter-corpora = toy
> #mml-before-wa = "-proportion 0.9"
> #mml-after-wa = "-proportion 0.9"
>
> ### create a bilingual concordancer for the model
> #
> #biconcor = $moses-bin-dir/biconcor
>
> ### lexicalized reordering: specify orientation type
> # (default: only distance-based reordering model)
> #
> #lexicalized-reordering = msd-bidirectional-fe
>
> ### hierarchical rule set
> #
> hierarchical-rule-set = true
>
> ### settings for rule extraction
> #
> #extract-settings = ""
>
> ### add extracted phrases from baseline model
> #
> #baseline-extract = $working-dir/model/extract.$baseline
> #
> # requires aligned parallel corpus for re-estimating lexical translation probabilities
> #baseline-corpus = $working-dir/training/corpus.$baseline
> #baseline-alignment = $working-dir/model/aligned.$baseline.$alignment-symmetrization-method
>
> ### unknown word labels (target syntax only)
> # enables use of unknown word labels during decoding
> # label file is generated during rule extraction
> #
> #use-unknown-word-labels = true
>
> ### if phrase extraction should be skipped,
> # point to stem for extract files
> #
> # extracted-phrases =
>
> ### settings for rule scoring
> #
> score-settings = "--GoodTuring"
>
> ### include word alignment in phrase table
> #
> #include-word-alignment-in-rules = yes
>
> ### sparse lexical features
> #
> #sparse-features = "target-word-insertion top 50, source-word-deletion top 50, word-translation top 50 50, phrase-length"
>
> ### domain adaptation settings
> # options: sparse, any of: indicator, subset, ratio
> #domain-features = "subset"
>
> ### if phrase table training should be skipped,
> # point to phrase translation table
> #
> # phrase-translation-table =
>
> ### if reordering table training should be skipped,
> # point to reordering table
> #
> # reordering-table =
>
> ### filtering the phrase table based on significance tests
> # Johnson, Martin, Foster and Kuhn (2007): "Improving Translation Quality by Discarding Most of the Phrasetable"
> # options: -n number of translations; -l 'a+e', 'a-e', or a positive real value -log prob threshold
> #salm-index = /path/to/project/salm/Bin/Linux/Index/IndexSA.O64
> #sigtest-filter = "-l a+e -n 50"
>
> ### if training should be skipped,
> # point to a configuration file that contains
> # pointers to all relevant model files
> #
> #config-with-reused-weights =
>
> #####################################################
> ### TUNING: finding good weights for model components
>
> [TUNING]
>
> ### instead of tuning with this setting, old weights may be recycled
> # specify here an old configuration file with matching weights
> #
> #weight-config = $working-dir/tuning/moses.weight-reused.ini.1
>
> ### tuning script to be used
> #
> tuning-script = $moses-script-dir/training/mert-moses.pl
> tuning-settings = "-mertdir $moses-bin-dir"
>
> ### specify the corpus used for tuning
> # it should contain 1000s of sentences
> #
> input-sgm = $wmt12-data/dev/nist02/nist02src.sgm
> #input-sgm = $wmt12-data/eval/nist05.zh.sgm
> #raw-input =
> #tokenized-input = $wmt12-data/dev/anist.ref
> #factorized-input =
> #input =
>
>
>
> #reference-sgm = $wmt12-data/dev/nist02/nist02ref.sgm
> #reference-sgm = $wmt12-data/dev/tunerefsgm/tuneref1
> raw-reference = $wmt12-data/dev/reference
> #factorized-reference =
> #reference =
>
> ### size of n-best list used (typically 100)
> #
> nbest = 100
>
> ### ranges for weights for random initialization
> # if not specified, the tuning script will use generic ranges
> # it is not clear, if this matters
> #
> # lambda =
>
> ### additional flags for the filter script
> #
> filter-settings = ""
>
> ### additional flags for the decoder
> #
> decoder-settings = ""
>
> ### if tuning should be skipped, specify this here
> # and also point to a configuration file that contains
> # pointers to all relevant model files
> #
> #config =
>
> #########################################################
> ## RECASER: restore case, this part only trains the model
>
> [RECASING]
>
> #decoder = $moses-bin-dir/moses
>
> ### training data
> # raw input still needs to be tokenized,
> # also tokenized input may be specified
> #
> #tokenized = [LM:europarl:tokenized-corpus]
>
> # recase-config =
>
> #lm-training = $srilm-dir/ngram-count
>
> #######################################################
> ## TRUECASER: train model to truecase corpora and input
>
> [TRUECASER]
>
> ### script to train truecaser models
> #
> trainer = $moses-script-dir/recaser/train-truecaser.perl
>
> ### training data
> # data on which truecaser is trained
> # if no training data is specified, parallel corpus is used
> #
> # raw-stem =
> # tokenized-stem =
>
> ### trained model
> #
> # truecase-model =
>
> ######################################################################
> ## EVALUATION: translating a test set using the tuned system and score it
>
> [EVALUATION]
>
> ### number of jobs (if parallel execution on cluster)
> #
> #jobs = 10
>
> ### additional flags for the filter script
> #
> #filter-settings = ""
>
> ### additional decoder settings
> # switches for the Moses decoder
> # common choices:
> # "-threads N" for multi-threading
> # "-mbr" for MBR decoding
> # "-drop-unknown" for dropping unknown source words
> # "-search-algorithm 1 -cube-pruning-pop-limit 5000 -s 5000" for cube
> pruning
> #
> #decoder-settings = ""
>
> ### specify size of n-best list, if produced
> #
> #nbest = 100
>
> ### multiple reference translations
> #
> multiref = yes
>
> ### prepare system output for scoring
> # this may include detokenization and wrapping output in sgm
> # (needed for nist-bleu, ter, meteor)
> #
> detokenizer = "$moses-script-dir/tokenizer/detokenizer.perl -l $output-extension"
> recaser = $moses-script-dir/recaser/recase.perl
> wrapping-script = "$moses-script-dir/ems/support/wrap-xml.perl $output-extension"
> #output-sgm =
>
> ### BLEU
> nist-bleu = $moses-script-dir/generic/mteval-v13a.pl
> nist-bleu-c = "$moses-script-dir/generic/mteval-v13a.pl -c"
> #multi-bleu = $moses-script-dir/generic/multi-bleu.perl
> #ibm-bleu =
>
> ### TER: translation error rate (BBN metric) based on edit distance
> # not yet integrated
> #
> # ter =
>
> ### METEOR: gives credit to stem / WordNet synonym matches
> # not yet integrated
> #
> # meteor =
>
> ### Analysis: carry out various forms of analysis on the output
> #
> analysis = $moses-script-dir/ems/support/analysis.perl
> #
> # also report on input coverage
> analyze-coverage = yes
> #
> # also report on phrase mappings used
> report-segmentation = yes
> #
> # report precision of translations for each input word, broken down by
> # count of input word in corpus and model
> #report-precision-by-coverage = yes
> #
> # further precision breakdown by factor
> #precision-by-coverage-factor = pos
> #
> # visualization of the search graph in tree-based models
> #analyze-search-graph = yes
>
> [EVALUATION:NIST05]
>
> ### input data
> #
> input-sgm = $wmt12-data/eval/nist05.zh.sgm
> # raw-input =
> # tokenized-input =
> # factorized-input =
> # input =
>
> ### reference data
> #
> reference-sgm = $wmt12-data/eval/nist05.en.sgm
> # raw-reference =
> # tokenized-reference =
> # reference =
>
> ### analysis settings
> # may contain any of the general evaluation analysis settings
> # specific setting: base coverage statistics on earlier run
> #
> #precision-by-coverage-base = $working-dir/evaluation/test.analysis.5
>
> ### wrapping frame
> # for nist-bleu and other scoring scripts, the output needs to be wrapped
> # in sgm markup (typically like the input sgm)
> #
> wrapping-frame = $input-sgm
>
> ##########################################
> ### REPORTING: summarize evaluation scores
>
> [REPORTING]
>
> ### currently no parameters for reporting section
>
>
> _______________________________________________
> Moses-support mailing list
> Moses-support@mit.edu
> http://mailman.mit.edu/mailman/listinfo/moses-support
>
>


--
Hieu Hoang
Research Associate
University of Edinburgh
http://www.hoang.co.uk/hieu
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://mailman.mit.edu/mailman/private/moses-support/attachments/20140312/6ad0c862/attachment.htm

------------------------------

_______________________________________________
Moses-support mailing list
Moses-support@mit.edu
http://mailman.mit.edu/mailman/listinfo/moses-support


End of Moses-support Digest, Vol 89, Issue 25
*********************************************
