This tutorial will introduce you to the ''cram_knowrob_vr (short: kvr)'' package, which uses data recorded in the Virtual Reality environment with [[http://robcog.org/|RobCog]], extracts information from it using [[http://www.knowrob.org/|KnowRob]] and executes CRAM high-level plans based on this data, either on the real robot or in the CRAM [[tutorials:advanced:bullet_world|bullet world]].

==== Prerequisites ====
This tutorial assumes that you've completed the [[tutorials:intermediate:json_prolog|Using JSON Prolog to communicate with KnowRob]] tutorial and therefore have ROS, CRAM, KnowRob and MongoDB installed. In order to be able to use the kvr package, a few specific changes have to be made: within ''knowrob_addons'', the ''knowrob_robcog'' package has to be replaced by this one: [[https://github.com/robcog-iai/knowrob_robcog|knowrob_robcog github]]. If it doesn't build because dependencies are missing, please install them. If it still doesn't build, you can try to pull this fork of the knowrob_addons instead: [[https://github.com/hawkina/knowrob_addons|knowrob_addons github]]. A sketch of the replacement steps is shown below.
The ''knowrob_robcog'' package contains the VR-specific queries, which we need in order to extract the information required by the plans.

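The following is a minimal sketch of the replacement, assuming your catkin workspace lives at ''~/catkin_ws'' and ''knowrob_addons'' is already cloned into its ''src'' directory; adjust the paths and build tool to your setup:
<code bash>
# Assumption: catkin workspace at ~/catkin_ws with knowrob_addons in src/
cd ~/catkin_ws/src/knowrob_addons

# Replace the bundled knowrob_robcog with the robcog-iai version
rm -rf knowrob_robcog
git clone https://github.com/robcog-iai/knowrob_robcog.git

# Rebuild the workspace (use catkin build if that is what you normally use)
cd ~/catkin_ws
catkin_make
</code>
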
=== Roslaunch ===
Launch a ''roscore'' first. Then, in a new terminal for each launch file, launch the bullet_world, json_prolog and roslisp_repl:
<code bash>
    $ roslaunch cram_bullet_world_tutorial world.launch
    $ roslaunch json_prolog json_prolog.launch
    $ roslisp_repl
</code>
The bullet world is needed for visualization. The json_prolog node allows us to access information in KnowRob from CRAM.

==== Initialization (within Emacs) ====
TODO

==== Importing new episode data into MongoDB and KnowRob (additional information) ====
In order to be able to query the data for information, we first need to import it into KnowRob and MongoDB.
MongoDB contains all the poses of the objects, camera, hand and furniture of an episode, so poses are read from there. KnowRob takes care of everything else: it knows all the object classes and instances used in the episodes. So before we can use the data, we need to import it into both MongoDB and KnowRob.

=== MongoDB ===
If you record data in the Virtual Reality using [[http://robcog.org/|RobCog]], you essentially get the following files:

''RawData_ID.json'' <- contains all the recorded events: what happened, where everything was, who moved what where and when, etc. \\
''SemanticMap.owl'' <- contains all information about the environment, e.g. where the tables were spawned before they were manipulated. \\
   EventData
     |-> EventData_ID.owl <- contains the times of the events: which event occurred when, and which events are possible
     |-> Timeline_ID.html <- the same as the above, but as a visualized overview

Each episode gets a randomly generated ID, so replace ''ID'' here with whatever your data's ID is. In order to be able to access this data in KnowRob, we need to load it into our local MongoDB, since this is where all data is kept for KnowRob to access. Unfortunately, at the time of writing this tutorial, MongoDB does not support the import of large files, and a recorded episode can become fairly large if one performs multiple pick-and-place tasks. We can work around this by splitting the file into multiple parts, which can be loaded into MongoDB individually. For that, we need a little program called ''jq''. If you don't have it, you can install it from [[https://stedolan.github.io/jq/|here]]. It allows us to manipulate json files from the command line. Then you can do the following in the directory where your ''RawData_ID.json'' is:

<code bash>
$ mkdir split
$ cd split
$ cat ../RawData_ID.json | jq -c -M '.' | split -l 2000
</code>

We create a new directory called ''split'' and cd into it, read the file with ''cat'', then use ''jq'' to make the file more compact (''-c''), disable colorful shell output (''-M'') and pass the JSON through unchanged (''.'' is the identity filter). The result is fed to ''split'', which creates many files of about 2000 lines each, since that is a size MongoDB is comfortable with.
After this, you should see many files in your split directory, named ''xaa'', ''xab'', ''xac''... you get the idea. The number of files you get depends on the size of your original .json file.
Now we can import the files into the database.

<code bash>
$ mongoimport --db DB-NAME --collection COLLECTION-NAME --file FILE-NAME
</code>

Example:
<code bash>
$ mongoimport --db Own-Episodes_set-clean-table --collection RawData_cUCM --file xaa
</code>

We keep everything in one database and name each collection after its ''RawData_ID'', in order not to forget what is what, but you can name them however you like. If you find importing all the files individually fairly tedious, you can write a small script for it; a sketch of such a loop is shown below. If you write a nicer one, let us know.
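A minimal sketch of such an import loop, assuming you are inside the ''split'' directory and reusing the database and collection names from the example above:
<code bash>
# Import every split chunk into MongoDB.
# DB and collection names are examples; adjust them to your own.
for f in x*; do
    mongoimport --db Own-Episodes_set-clean-table --collection RawData_cUCM --file "$f"
done
</code>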
Now we can create a ''mongodump'':

<code bash>
$ mongodump --db DB-NAME --collection COLLECTION-NAME
</code>

Example:
<code bash>
$ mongodump --db Own-Episodes_set-clean-table --collection RawData_cUCM
</code>
Then you get a ''dump'' directory, which contains a ''RawData_ID.bson'' file and a ''RawData_ID.metadata.json''.

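By default, ''mongodump'' writes into a ''dump/DB-NAME/'' directory, so for the example above you would expect something like:
<code bash>
$ ls dump/Own-Episodes_set-clean-table
RawData_cUCM.bson  RawData_cUCM.metadata.json
</code>
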
After this, you can look at how the other episodes and their directories are structured, and create the directories for your data in the same way.

Should you ever need to directly import a ''*.bson'' file, you can do so as well, using ''mongorestore'':
<code bash>
$ mongorestore -d DB-NAME -c COLLECTION-NAME FILE-NAME.bson
</code>
Example:
<code bash>
$ mongorestore -d Own-Episodes_set-clean-table -c RawData_qtzg RawData_qtzg.bson
</code>

=== KnowRob ===
KnowRob needs to be able to find the .owl and .bson files, so it is important to keep one directory that contains all the episodes. You can either have a look at or download these [[ftp://open-ease-stor.informatik.uni-bremen.de/|episodes]] for reference.

=== Performance ===
Depending on how many collections your database has, querying for information can get slow. One way to make it faster is to add an index over the timestamp for each collection. One way to add this is to install [[https://www.mongodb.com/products/compass|Compass]] for MongoDB. Launch it and connect it to your database; the default settings should be fine, so just click ''ok'' when it launches. Then go to your collection -> indexes -> create index. Call the new index ''timestamp'', select the field ''timestamp'', set the type to ''1 (asc)'' and click create. Repeat this for all the collections. It will improve query speed greatly.
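If you prefer the command line, the same index can be created with the MongoDB shell; the database and collection names below are examples, adjust them to your own:
<code bash>
# Create an ascending index on "timestamp" for one collection
$ mongo Own-Episodes_set-clean-table --eval 'db.RawData_cUCM.createIndex({timestamp: 1})'
</code>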