I visited the Industry-Academia Cooperation Fair last Thursday and Friday. I had a good experience there and learned a great deal. Below are some pictures from the exhibition.
1. Research about ROS
I found the lecture given by TORK (Tokyo Opensource Robotics Kyokai Association) very interesting and significant; it discussed ROS (Robot Operating System) in detail.
1.1 Introduction of ROS
Robot Operating System (ROS) is a collection of software frameworks for robot software development, which provides standard OS services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management. Running sets of ROS-based processes are represented in a graph architecture where processing takes place in nodes that may receive, post, and multiplex sensor, control, state, planning, actuator, and other messages. Despite the importance of reactivity and low latency in robot control, ROS itself is not a real-time OS, though it is possible to integrate ROS with real-time code.
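To make the graph idea concrete, here is a minimal pure-Python sketch of the publish/subscribe pattern that ROS nodes use. This is not real ROS code; the topic name and message contents are made up for illustration.

```python
# Minimal publish/subscribe sketch of the ROS graph idea (not real ROS code).
# Nodes exchange messages over named topics; a simple broker routes them.

class Graph:
    """Routes messages from publishers to subscribers by topic name."""
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers.get(topic, []):
            callback(message)

graph = Graph()
received = []

# A "controller" node subscribes to sensor data on a topic...
graph.subscribe("/scan", lambda msg: received.append(msg))

# ...and a "sensor" node publishes a reading to the same topic.
graph.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})

print(received)  # [{'ranges': [1.2, 0.8, 2.5]}]
```

In real ROS, the broker role is played by the master (for name lookup) plus direct node-to-node connections, and messages are typed; the decoupling idea is the same.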
1.2 Main Functions of ROS
ROS-based software is organized into language-independent tools, ROS client library implementations such as roscpp, rospy, and roslisp, and packages containing application-related code that uses one or more of these client libraries. Both the language-independent tools and the main client libraries (C++, Python, and Lisp) are released under the terms of the BSD license, and as such are open-source software, free for both commercial and research use. The majority of other packages are licensed under a variety of open-source licenses. These packages implement commonly used functionality and applications such as hardware drivers, robot models, data types, planning, perception, simultaneous localization and mapping, simulation tools, and other algorithms. ROS's main functions include:
1.2.1 Perception
Perception relies on tools such as OpenCV, PCL, and OpenNI. OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, originally developed by the Intel Russia research center in Nizhny Novgorod and later supported by Willow Garage and Itseez. The Point Cloud Library (PCL) is a standalone, large-scale, open project for 2D/3D image and point-cloud processing. OpenNI (Open Natural Interaction) is an industry-led, non-profit organization focused on certifying and improving interoperability of natural-interaction devices, the applications that use those devices, and the middleware that facilitates access to them.
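As an example of the kind of basic real-time image operation OpenCV provides, here is a pure-Python sketch of RGB-to-grayscale conversion. This is illustrative, not OpenCV itself; the weights are the standard luma coefficients that OpenCV's cvtColor uses, and the sample pixels are made up.

```python
# Pure-Python sketch of RGB -> grayscale conversion, the kind of basic
# operation OpenCV performs with cvtColor (illustrative, not OpenCV itself).

def rgb_to_gray(image):
    """image: list of rows, each row a list of (r, g, b) tuples in 0..255."""
    # Standard luma weights (ITU-R BT.601), as used by cv2.COLOR_RGB2GRAY.
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in image]

image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
print(rgb_to_gray(image))  # [[76, 150], [29, 255]]
```

The point of using a library like OpenCV instead of a loop like this is speed: the same operation runs over megapixel camera frames at sensor frame rates.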
1.2.2 Navigation
The Navigation Stack is fairly simple on a conceptual level. It takes in information from odometry and sensor streams and outputs velocity commands to send to a mobile base. Using the Navigation Stack on an arbitrary robot, however, is a bit more complicated. As a prerequisite, the robot must be running ROS, have a tf transform tree in place, and publish sensor data using the correct ROS message types. The Navigation Stack also needs to be configured for the shape and dynamics of the robot to perform well. The official documentation serves as a guide to typical Navigation Stack setup and configuration.
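The core idea of turning a pose estimate into a velocity command can be sketched with a toy proportional controller. This is not the actual Navigation Stack algorithm (which involves costmaps, global and local planners, and recovery behaviors); the gains and values here are made up for illustration.

```python
import math

# Toy sketch of the Navigation Stack's job: given the robot's current pose
# and a goal, output a velocity command (linear, angular) for a mobile base.

def velocity_command(pose, goal, k_lin=0.5, k_ang=1.0):
    """pose: (x, y, heading in radians); goal: (x, y). Returns (v, w)."""
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    distance = math.hypot(dx, dy)
    # Heading error, wrapped into [-pi, pi].
    error = math.atan2(dy, dx) - pose[2]
    error = math.atan2(math.sin(error), math.cos(error))
    # Proportional control: drive forward and turn toward the goal.
    return (k_lin * distance, k_ang * error)

v, w = velocity_command((0.0, 0.0, 0.0), (2.0, 0.0))
print(v, w)  # 1.0 0.0  (goal straight ahead: drive forward, no turning)
```

In ROS this command would be published as a geometry_msgs/Twist message on the cmd_vel topic; here it is just returned as a tuple.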
1.2.3 Manipulation
The main goals of the manipulation pipeline are to pick up and place both known objects (recognized from a database) and unknown objects, and to perform these operations collision-free in a largely unstructured environment.
1.2.4 An example motion
The motion planner reasons about collisions in a binary fashion: a state that does not bring the robot into collision with the environment is considered feasible, regardless of how close the robot is to the obstacles. As a result, the planner might compute paths that bring the arm very close to obstacles. To avoid collisions due to sensor errors or miscalibration, the robot model used for collision detection is padded by a given amount (normally 2 cm). The motion of the arm for executing a grasp is:
Figure 1. Interpolated IK path from pre-grasp to grasp, planned for a grasp point of an unknown object.
Figure 2. A path to bring the arm to the pre-grasp position has been planned using the motion planner and executed.
Figure 3. The interpolated IK path from pre-grasp to grasp has been executed.
Figure 4. The object has been lifted. Note that the object's collision model has been attached to the gripper.
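The padded binary collision test described above can be sketched as follows. This is an illustrative point-versus-sphere check using the 2 cm pad from the text; real collision checking uses full 3D robot meshes and sensor-derived environment models.

```python
import math

# Sketch of binary collision checking with padding (illustrative only).
# A robot link is modeled as a sphere whose radius is inflated by a 2 cm
# pad, so states that merely come close to obstacles are rejected too.

PADDING = 0.02  # metres, the pad amount mentioned in the text

def in_collision(link_center, link_radius, obstacle_points, padding=PADDING):
    """Binary test: True if any obstacle point lies inside the padded link."""
    padded = link_radius + padding
    return any(math.dist(link_center, p) <= padded for p in obstacle_points)

# A point 30 cm away does not touch a 25 cm link, even with padding:
print(in_collision((0.0, 0.0, 0.0), 0.25, [(0.30, 0.0, 0.0)]))  # False
# A point 26 cm away misses the bare link but hits the 27 cm padded one:
print(in_collision((0.0, 0.0, 0.0), 0.25, [(0.26, 0.0, 0.0)]))  # True
```

The binary nature is visible here: the test returns only feasible/infeasible, with no notion of "how close", which is exactly why the planner may produce paths that skim the padded boundary.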
1.3 ROS Applications in Japan
This is an example of an application of ROS in Japan: the VS Series VS-050/060 (6-axis robots) from DENSO.
2. My Own Study of ROS
To learn ROS better, I installed it on my PC. Because ROS has better support for Ubuntu, I installed it on Ubuntu 14.04, following the tutorial on the official website, and the installation succeeded. When I input the command line in the terminal:
Information about the installed ROS is displayed like this:
This will provide a flexible framework for writing robot software.
2.1 Research about Nao Robot and NaoQi System
2.1.1 Introduction of Nao
NAO is a little character with a unique combination of hardware and software: it consists of sensors, motors, and software driven by NAOqi, its dedicated operating system. It gets its magic from its programming and animation.
Movement libraries are available through graphical tools such as Choregraphe and other advanced programming software. They allow users to create elaborate behaviors, access the data acquired by the sensors, and control the robot… NAO has:
25 degrees of freedom, for movement
Two cameras, to see its surroundings
An inertial measurement unit, which lets it know whether it is upright or sitting down
Touch sensors to detect your touch
Four directional microphones, so it can hear you
This combination of technologies (and many others) gives NAO the ability to detect its surroundings.
2.1.2 Simulation of This Robot on Mac OS
We can simulate the robot even if we do not have a real one (a real NAO is very expensive, about $7,000). After installing its development suite, the robot can be simulated by programming. The NAOqi framework provides many APIs for programming; we can use Python to call these APIs and implement the desired functions.
Take a simple program as an example:
I create a proxy so the robot can say something: self.fpc = ALProxy('ALTextToSpeech')
Then I use this proxy to speak the string "Hello Fang-san": self.fpc.say("Hello Fang-san")
This line sends an instruction to the NAOqi text-to-speech module:
self.fpc is the proxy object we use.
say() is the method.
"Hello Fang-san" is the parameter.
In the same way, we can use a different module, such as the motion module ALMotion: self.fpc = ALProxy('ALMotion')
After calling its rest() method, we can see that the robot takes a "rest".
3. Some Other Studies
These are some assistive/substitution systems I learned about from the exhibition and the YASKAWA lecture.