
Code and Short Tutorial for Neural Parser

Code

Tutorial

    1. Get it running
    2. Grab the tarball, unzip, and untar it.
    3. I'm running in Ubuntu 14.04, with nest and spinnaker. You should get those running (or just nest if that's all you want).
    4. You should be able to test that the system runs by
      python testLangOneSentence.py
    5. Note: the tarball comes set to run on spinnaker. To change this, comment out simulator=spinnaker in nealParams.py (a guess at what that line looks like is sketched after this list).
    6. This should generate two pkl files in the results directory. You can convert the pkl files to spike trains by
      python printPklFile.py results/parseState.pkl > sent0States.sp
      This is a text file of spikes (hence the .sp extension) that should end with neurons 30-37 firing until about 400 ms. (That's the final parse state of the first (0) sentence.) A small script for checking this is sketched after this list.
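For step 5 above: the nealParams.py change probably amounts to something like the lines below. This is only a guess from the tutorial text, so check the actual file; the point is just that a single assignment picks the backend.

      # sketch of the relevant part of nealParams.py (guessed from the
      # tutorial text; the rest of the file is not shown)
      #simulator = "nest"         # uncomment to run on nest
      simulator = "spinnaker"     # tarball default; comment out to use nest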
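For step 6 above: if you want to check the final state mechanically rather than by eye, something like the script below works. It assumes each line of the .sp file holds a neuron id followed by a spike time, which may not match the printPklFile.py output exactly, so adjust the parsing if needed; checkFinalState.py is just a suggested name.

      # checkFinalState.py: report which neurons fire near the end of a .sp file.
      # Assumes two numbers per line, neuron id then spike time; adjust the
      # indexing below if the columns are the other way round.
      import sys

      spikes = []                                  # (neuron, time) pairs
      with open(sys.argv[1]) as spikeFile:
          for line in spikeFile:
              fields = line.split()
              if len(fields) >= 2:
                  spikes.append((int(float(fields[0])), float(fields[1])))

      lastTime = max(time for (neuron, time) in spikes)
      window = 50.0                                # look at the last 50 ms
      lateNeurons = sorted(set(neuron for (neuron, time) in spikes
                               if time >= lastTime - window))
      print("last spike at %s ms" % lastTime)
      print("neurons firing in the last %s ms: %s" % (window, lateNeurons))

Run it as python checkFinalState.py sent0States.sp; for sentence 0 the list should be neurons 30-37 if the parse completed.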
    1. Explanation of the files.
    2. All of the python files are for the pynn parser.
    3. testLangOneSentence.py: this tests the parsing system on one sentence. By default it is the 0 sentence, but you can specify another sentence.
    4. cABot3Lang.py: the testLang*.py files use this. It specifies the language to be parsed, and allocates the topology. When we want to parse sentences from a different language, we'll copy this and change it to specify the new language.
    5. parseClass.py: this is the base class for parsing. It should be generic across languages being parsed.
    6. stateMachineClass.py: the system parses a regular language, which can be defined by a state machine, so this class is used; it's well tested and should work robustly for sentences. (A plain-Python sketch of the state-machine idea appears after this list.)
    7. nealCoverClass.py: NEAL is the Neuromorphic Embodied Agents that Learn project. The idea is that we write agents by combining modules (like this parser); then we change one parameter (the simulator), and the system works on different platforms. Unfortunately some pyNN functions differ from simulator to simulator. These are included in nealCoverClass to reduce the amount of branching (e.g. if simulator == nest) in the modules. (An illustrative sketch of the cover-class idea appears after this list.)
    8. nealParams.py: this specifies the simulator, and is used for constructing agents.
    9. testLang.py: this is a testfile that runs all of the sentences.
    10. printPklFiles.py: a program to convert pkl files to spike data.
    11. runNestTests.sh: automated script for testing nest. (Stored results not included in the Oct 1 tarball.)
    12. runSpinnTests.sh: automated script for testing spinnaker. (Stored results not included in the Oct 1 tarball.)
    13. testParseCABot3.py: an old test function
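About stateMachineClass.py (item 6): the state-machine idea is easy to picture in plain Python. The sketch below is not the actual class interface, just an illustration of how a regular language of command sentences reduces to states and word-labelled transitions; the real class builds the equivalent machinery out of spiking neuron populations (which is why a parse state shows up as a group of neurons firing, like 30-37 above).

      # Illustration only: a plain-Python finite state machine for a tiny
      # sentence grammar; this is not the stateMachineClass.py interface.
      class TinyStateMachine:
          def __init__(self):
              self.startState = 0
              self.nextState = 1
              self.transitions = {}            # (state, word) -> state

          def addStateOnWord(self, fromState, word):
              # make a new state reached from fromState on word
              toState = self.nextState
              self.nextState += 1
              self.transitions[(fromState, word)] = toState
              return toState

          def parse(self, words):
              state = self.startState
              for word in words:
                  if (state, word) not in self.transitions:
                      return None              # sentence not in the language
                  state = self.transitions[(state, word)]
              return state                     # the final parse state

      machine = TinyStateMachine()
      turnState = machine.addStateOnWord(machine.startState, 'turn')
      turnRightState = machine.addStateOnWord(turnState, 'right')
      turnRightFinal = machine.addStateOnWord(turnRightState, '.')

      print(machine.parse(['turn', 'right', '.']))    # reaches turnRightFinal
      print(machine.parse(['step', 'forward', '.']))  # None: not in the language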
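About nealCoverClass.py (item 7): the snippet below is only an illustration of the cover-class idea, not the real NEAL code, and the class and method names are hypothetical stand-ins. The point is that the backend is chosen once, from a parameter like the one in nealParams.py, and the modules then call the cover instead of branching on the simulator themselves.

      # Illustration only: a thin cover class that hides simulator branching.
      # Class and method names here are made up, not NEAL's actual API.
      class SimulatorCover:
          def __init__(self, simulator):
              self.simulator = simulator           # e.g. 'nest' or 'spinnaker'
              if simulator == 'spinnaker':
                  import pyNN.spiNNaker as sim     # SpiNNaker backend
              else:
                  import pyNN.nest as sim          # nest backend
              self.sim = sim

          def setup(self, timestep=1.0):
              # any backend-specific setup quirks live here, so a module like
              # parseClass.py never needs its own "if simulator == ..." branches
              self.sim.setup(timestep=timestep)

          def run(self, duration):
              self.sim.run(duration)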
    1. Make a parser for a new language.
    2. cp cABot3Lang.py turnStepLang.py
    3. cp testLangOneSentence.py testTurnLang.py
    4. Modify testTurnLang.py to import turnStepLang instead of cABot3Lang (just change the import). It should now run and do just what testLangOneSentence.py did; you can check by doing a diff on the converted pkl files.
    5. Pare out most of what you copied. Delete the functions addMoveSentences, addGoSentences, and addCentreSentences, along with the places where they're called in main. This should still run.
    6. Continue paring. Remove the explore and stop sentences in main and their references. Remove all but the first two sentences in setupTestSentences. In addTurnSentences, remove all of the sentences but turn right; that's the first 11 lines and the roughly 50 lines that remain after the second 11.
    7. Fix up the two global variables. In turnStepLang.py, you removed the setting of lengthOfLongestSentence (it was in addMoveSentences). In main, declare it as a global variable and set it to 2 (I did this next to totalSentences), and change totalSentences from 23 to 2. This should run and give you largely the same results as the original test (though the final state doesn't run as long, because the longest sentence is shorter).
    8. This only parses the sentence turn right., because that's the only sentence we have defined. Next we're going to define the sentence step forward., in main after the call to addTurnSentences.
    9. Add
      parser.addSentence(['step','forward','.'])
      stepState = parser.addStateToNewStateOnWord(parser.startState,'step')
      stepForwardState = parser.addStateToNewStateOnWord(stepState,'forward')
      stepForwardFinalState = parser.addStateToNewStateOnWord(stepForwardState,'.')
      You've now actually added the sentence, but it's not hooked into the test. If you call
      python testTurnLang.py 1
      you'll try to parse sentence 1, but it's not defined in the test, and you should get an error. (A further example of this pattern is sketched after this list.)
    10. In setupTestSentences, change turn right (two lines) to step forward. Now when you run the test (with the 1 argument), you should get a complete parse (neurons 60-67 should be firing at the end).
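The three-call pattern from step 9 extends to further sentences in the same way. As an illustration only (this sentence isn't part of the tutorial's language), a one-word command like stop. could be added with

      parser.addSentence(['stop','.'])
      stopState = parser.addStateToNewStateOnWord(parser.startState,'stop')
      stopFinalState = parser.addStateToNewStateOnWord(stopState,'.')

If you add a sentence like this, remember to bump totalSentences, check lengthOfLongestSentence, and add the corresponding lines to setupTestSentences so the test presents it, as in steps 7 and 10.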