====== Using episode data from Virtual Reality in CRAM ======

Tested under CRAM version v0.7.0

This tutorial will introduce you to the ''cram_knowrob_vr'' package, which uses episode data recorded in Virtual Reality to parametrize pick and place plans, which are then executed by a robot in the bullet world.

==== Idea ====
We essentially want to teach a robot how to perform everyday activities without having to dive deep into code, but rather by simply showing it what we want it to do in Virtual Reality. This allows robots to learn from humans easily, since this way the robot can acquire information about where the human looked for things and where things were placed. Of course, this could also be hard coded, but that would take a lot more time and be prone to failure, since we as humans often forget to describe very minor things which we take for granted, but which can play a huge role in the success of a task for the robot. E.g. if the cooking pot is not in its designated area, we would automatically check the dishwasher or the sink area. This is something the robot would have to learn first.
+ | |||
+ | The advantage of using Virtual Reality for this is also, that we can train the robot on all kinds of different kitchen setups, which can be build within a few minutes, instead of having to move around physical furniture. This would also allow for generalization of the acquired data and would add to the robustness of the pick and place tasks. | ||
==== Prerequisites ====
This tutorial assumes that you've completed the [[tutorials:advanced:json_prolog|JSON Prolog]] tutorial and that the [[https://github.com/robcog-iai/knowrob_robcog|knowrob_robcog]] package is present in your ROS workspace. If it doesn't build, make sure that all of its dependencies are available.
The ''cram_knowrob_vr'' package needs it in order to query the recorded episode data from KnowRob.
+ | |||
+ | === Roslaunch === | ||
+ | Launch a '' | ||
+ | <code bash> | ||
+ | $ roslaunch cram_knowrob_vr simulation.launch | ||
+ | $ roslisp_repl | ||
+ | </ | ||
The ''simulation.launch'' file takes care of the following:
  * **upload:** uploads the URDF descriptions to the ROS parameter server
  * **knowrob:** launches KnowRob together with the ''json_prolog'' node, through which we can query it from Lisp
  * **boxy:** uploads the description of the Boxy robot

==== Usage and Code ====
The following describes what the different files and their functions do, when and how to use them, and why they are needed. It is separated into a usage section and a code section. The usage section focuses on how to get everything up and running, while the code section explains the individual source files, following their order in the ''.asd'' file.

=== Usage ===
Here we will first explain what needs to be done to get the robot to execute a pick and place plan in the simulated bullet world. In the next section, we will take a closer look at the individual source files and explain their function.

Before you load the package, navigate to the ''cram_knowrob_vr'' package and make sure that the path to the episode data matches your setup, so that the episodes can be found when they are loaded.

Now you can load the ''cram_knowrob_vr'' package in the REPL:
<code lisp>
CL-USER> (ros-load:load-system "cram_knowrob_vr" :cram-knowrob-vr)
</code>

Depending on whether the PR2 or the Boxy robot is supposed to be used in the simulation, the respective robot description needs to be loaded first.

For PR2:
<code lisp>
CL-USER> (ros-load:load-system "cram_pr2_description" :cram-pr2-description)
</code>

For Boxy:
<code lisp>
CL-USER> (ros-load:load-system "cram_boxy_description" :cram-boxy-description)
</code>

To launch all the necessary initializations, call:
<code lisp>
CL-USER> (kvr::init-full-simulation)
</code>
+ | |||
+ | This will create a lisp ros node, clean up the belief-state, | ||
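
Once the initialization has finished, you can inspect what was spawned directly from the REPL. This uses the ''btr'' package known from the bullet world tutorials; the object name '':kitchen'' is an assumption and may be named differently in your setup:
<code lisp>
;; look up the kitchen object in the current bullet world;
;; returns NIL if nothing by that name was spawned
CL-USER> (btr:object btr:*current-bullet-world* :kitchen)
</code>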
+ | |||
+ | Now, let's execute the pick and place plan: | ||
+ | <code lisp> | ||
+ | CL-USER> (cram-urdf-projection: | ||
+ | </ | ||
With this call we first say that we want to use the simulated bullet-world PR2 robot instead of the real one, and then we simply call the demo. The demo will read out the VR episode data, extract the positions of the objects that were manipulated, and use them to parametrize the robot's fetch and deliver plans.

=== Code ===
== mesh-list.lisp ==
Contains a list of all the meshes which we want to spawn based on their locations in the semantic map. Some of them are commented out, e.g. walls, lamps and the objects we interact with, in order to keep the bullet world neat and clean. In Unreal, however, the walls and lamps are spawned; we simply don't need them in bullet at the moment.
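
For reference, spawning a single mesh by hand uses ''btr-utils:spawn-object'' from the bullet reasoning utilities; the object name and pose below are made up for illustration:
<code lisp>
;; spawn one :cup mesh at a fixed pose in the bullet world;
;; mesh-list.lisp automates this for the objects of the semantic map
CL-USER> (btr-utils:spawn-object :cup-1 :cup
                                 :pose '((-1.0 1.5 0.9) (0 0 0 1)))
</code>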
+ | |||
+ | == mapping-urdf-semantic.lisp == | ||
+ | Mapps the urdf kitchen (bullet world simulation environment) to the semantic map(virtual reality environment), | ||
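
Conceptually, such a mapping is just a lookup table from semantic map names to URDF names. The entries below are invented for illustration and are not taken from the actual source:
<code lisp>
;; hypothetical name mapping between the two environments
(defparameter *semantic->urdf*
  '(("IslandArea" . :kitchen-island)
    ("SinkArea"   . :sink-area)))

(defun urdf-name (semantic-name)
  "Look up the URDF name of a semantic map object."
  (cdr (assoc semantic-name *semantic->urdf* :test #'string-equal)))
</code>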
+ | |||
+ | == init.lisp == | ||
+ | Contains all the needed initialization functions for the simulation environment, | ||
+ | |||
+ | == queries.lisp == | ||
+ | Contains some query wrappers so that they can be called as lisp functions, and also includes the queries which read out the data from the database e.g. the poses of the object, hand and head of the actor in the virtual reality. | ||
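
Such a wrapper essentially sends a Prolog query string through ''json_prolog'' and returns the bindings. A minimal sketch, assuming a running KnowRob; the predicate and the package keyword are placeholders rather than the actual queries used in the package:
<code lisp>
;; hypothetical query wrapper: ask KnowRob for all instances
;; of an object type and return the variable bindings
(defun query-object-instances (object-type)
  (json-prolog:prolog-simple
   (format nil "owl_individual_of(Obj, ~a)." object-type)
   :package :kvr))
</code>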
+ | |||
+ | == query-based-calculations.lisp == | ||
+ | Includes all transformation calculations to make the poses of the robot relative to the respective object, and the poses of the objects relative to the surfaces. Mostly works on lazy-lists of poses. | ||
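
At its core this is plain transform algebra with ''cl-transforms''. For example, expressing the recorded hand pose relative to an object, when both are given in map coordinates:
<code lisp>
;; object-T-hand = (map-T-object)^-1 * map-T-hand
(defun relative-transform (map-t-object map-t-hand)
  (cl-transforms:transform*
   (cl-transforms:transform-inv map-t-object)
   map-t-hand))
</code>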
+ | |||
+ | == designator-integration.lisp == | ||
+ | Integrates the pose calculations from the query-based-calculations into location designators. | ||
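
In CRAM, location designators are resolved by registered generator functions, so the integration roughly amounts to registering a generator that yields the VR-based poses. A schematic sketch; the generator body is a stub, not the package's actual code:
<code lisp>
;; schematic: plug VR-based poses into location designator resolution
(defun vr-pose-generator (location-designator)
  "Would return the lazy list of poses computed in
query-based-calculations.lisp for this designator."
  (declare (ignore location-designator))
  nil)

(desig:register-location-generator 15 'vr-pose-generator)
</code>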
+ | |||
+ | == fetch-and-deliver-based-demo.lisp == | ||
+ | Sets up the plan for the demo with the respective action designator. Also includes logging functions. | ||
+ | |||
+ | == debugging-utils.lisp == | ||
+ | Contains a lot of debugging and helper functions which can also visualize all the calculated poses. | ||

==== Importing new episode data into MongoDB (Additional information) ====
In order for us to be able to query episode data for information, the data first has to be imported into MongoDB and made known to KnowRob.
The MongoDB contains all the poses of the objects, the camera, the hand and the furniture of the episode, so the poses are read out from there. KnowRob takes care of everything else: it knows of all the object classes and instances used in the episodes. So in order to be able to use the data, we need to import it into both MongoDB and KnowRob.
The following description only applies if the data has been recorded using the [[http://robcog.org|RobCoG]] framework.
+ | |||
+ | === MongoDB (quick with scripts)=== | ||
+ | There is a script which imports the episode data into the currently running MongoDB instance. Please see [[https:// | ||
+ | |||
+ | === MongoDB (manuel in-depth) === | ||
+ | This will explain how to import the episode data manually into MongoDB. This is essentially what the script in the description above does automatically. So if the script didn't work (please leave an issue on [[https:// | ||
If you record data in the Virtual Reality using [[http://robcog.org|RobCoG]], the episode data is saved, among other files, as a ''RawData_ID.bson'' file. To import it into MongoDB, navigate to the directory containing the ''.bson'' file and restore it into a database (''-d'') and collection (''-c'') of your choice, e.g.:
<code bash>
$ mongorestore -d Own-Episodes_set-clean-table -c RawData_qtzg RawData_qtzg.bson
</code>
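
You can verify the import by listing the collections of the database you restored into:
<code bash>
# open a mongo shell on the episode database and list its collections
$ mongo Own-Episodes_set-clean-table --eval "db.getCollectionNames()"
</code>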
+ | |||
+ | === KnowRob === | ||
+ | KnowRob needs to be able to find the .owl and .bson files. It's important to have a directory for all the Episodes. You can either have a look or download these [[ftp:// | ||
+ | |||
+ | === Performance === | ||
+ | This step is also covered by the [[https:// | ||
+ | |||
+ | Depending on how many collections your database has, it can get slow when quering for information. One way to make it faster, is to include an index over timestamp for all collections. One way to add this, is to install [[https:// | ||
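
If you would rather not use a GUI, the index can be created from the ''mongo'' shell. A sketch, assuming the database name from the example above and a top-level ''timestamp'' field:
<code bash>
# create an ascending index on "timestamp" in every collection
$ mongo Own-Episodes_set-clean-table --eval '
    db.getCollectionNames().forEach(function (c) {
      db[c].createIndex({ "timestamp": 1 });
    })'
</code>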