The State of Robot Fleet Management Systems from an intralogistics perspective. Part 3: Emerging Technologies
This is the third part of the publication “The State of Robot Fleet Management Systems from an intralogistics perspective: Technologies, Challenges, Trends”.
Part 1. Introduction and Literature overview
Part 2. Drivers & Enablers
The list of technologies below is the result of an extensive case study addressing the question “what is the state of the art in the field?”.
1. Multi-robot systems
A multi-robot system, or MRS for short, is in a general sense two or more autonomous mobile robots that are able to communicate and coordinate their actions to achieve defined goals. This is the central and most important concept in RFMS, since a robot fleet is itself an MRS.
An MRS takes over routine jobs such as task scheduling, task allocation, and traffic management. And, most excitingly, it enables jobs that require more capabilities than any individual robot has: imagine a mobile manipulator loading and unloading a transport AMR, or two AMRs carrying a load that is too big or too heavy for either of them alone. By making robots able to collaborate with each other we expand the number of tasks and use cases they can handle while keeping the same amount of resources (robots). As a result, less human labor is needed both for routine tasks (scheduling, allocation, and management) and for the actual work (moving, loading, and unloading). Reducing the human factor also brings higher efficiency, predictability, and repeatability.
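To make the task-allocation part concrete, below is a minimal sketch of a greedy, distance-based allocator. It is an illustration only: the function name and the nearest-idle-robot rule are assumptions for this example, not the method of any particular RFMS.

```python
import math

def allocate_tasks(robots, tasks):
    """Greedily assign each task to the nearest idle robot.

    robots: dict of robot_id -> (x, y) current position
    tasks:  list of (task_id, (x, y)) pickup locations
    Returns a dict of task_id -> robot_id.
    """
    idle = dict(robots)      # robots still available in this round
    assignment = {}
    for task_id, (tx, ty) in tasks:
        if not idle:
            break            # more tasks than robots: the rest stay queued
        # choose the idle robot closest (Euclidean) to the task's pickup point
        best = min(idle, key=lambda r: math.hypot(idle[r][0] - tx, idle[r][1] - ty))
        assignment[task_id] = best
        del idle[best]       # one task per robot per round
    return assignment

robots = {"amr1": (0.0, 0.0), "amr2": (10.0, 5.0)}
tasks = [("t1", (9.0, 4.0)), ("t2", (1.0, 1.0)), ("t3", (5.0, 5.0))]
print(allocate_tasks(robots, tasks))  # {'t1': 'amr2', 't2': 'amr1'}
```

Real systems replace the greedy rule with optimization (e.g. auction- or MILP-based allocation) that also accounts for battery levels, priorities, and traffic.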
MRS as a research topic shows significant growth: there were around 100 publications with the “multi-robot systems” keyword in 2000, compared to almost 1,500 publications in 2019 in the IEEE Xplore Digital Library. It is a very broad topic, and readers are referred to the following literature.
Foundational research by Arai et al. [34] describes advances in seven principal topic areas of Multi-Robot Systems identified in 2002: Biological Inspirations; Communication; Architectures, task allocation, and control; Localization, mapping, and exploration; Object transport and manipulation; Motion coordination; and Reconfigurable robots.
Dudek et al. [35] in 2002 proposed a taxonomy of MRS that includes seven axes: Collective Size; Communication Range; Communication Topology; Communication Bandwidth; Collective reconfigurability; Processing Ability; and Collective Composition.
Farinelli et al. [36] in 2004 proposed a taxonomy for the classification of MRS focused on coordination. It is characterized by two groups of dimensions: Coordination Dimensions and System Dimensions. The first group consists of four features (cooperation, knowledge, coordination, and organization), and the second includes communication, team composition, system architecture, and team size.
Gautam and Mohan [37] in 2012 surveyed various interaction techniques in MRS which are important with respect to goal attainment and task completion.
One of the latest surveys was done by Rizk et al. [38] in 2019: the authors studied recent contributions to the MRS field and highlighted the state of the art in sub-fields such as task decomposition, coalition formation, task allocation, perception, and multi-agent planning and control.
A comprehensive study by Yan et al. [39] presents a systematic survey and analysis of the existing literature on multi-robot coordination. It gives a clear, in-depth view of problems such as the multi-robot environment (cooperative and competitive), the inherent problem of resource conflict, coordination (static and dynamic), communication (explicit and implicit), task planning and motion planning, and decision-making (centralized and decentralized).
An impressive subtopic of MRS is swarm behavior. Usually this means a homogeneous group of relatively simple robots that form a swarm to accomplish a task. A swarm can resemble a biological one, e.g. ants or bees.
In studies on swarm behavior, another topic often appears: emergent behavior. This is behavior of a group of robots that exhibits properties no individual robot has on its own, and that was not declared explicitly by the developer but is discovered by the group itself.
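A classic way to see emergence is an alignment rule: each simulated robot only averages the headings of its nearby neighbors, yet the whole group ends up moving in a common direction. Below is a minimal sketch of such a Vicsek/boids-style rule; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, RADIUS, SPEED, NOISE = 30, 2.0, 0.1, 0.05

pos = rng.uniform(0, 10, size=(N, 2))      # robot positions in a 10x10 area
ang = rng.uniform(-np.pi, np.pi, size=N)   # robot headings

for step in range(200):
    new_ang = np.empty(N)
    for i in range(N):
        # the only local rule: average the headings of neighbors within RADIUS
        near = np.linalg.norm(pos - pos[i], axis=1) < RADIUS
        new_ang[i] = np.arctan2(np.sin(ang[near]).mean(),
                                np.cos(ang[near]).mean())
    ang = new_ang + rng.normal(0, NOISE, size=N)   # small heading noise
    pos += SPEED * np.column_stack((np.cos(ang), np.sin(ang)))

# a value close to 1.0 means the swarm has aligned into a common direction,
# although no robot was ever given a global direction
alignment = np.hypot(np.cos(ang).mean(), np.sin(ang).mean())
print(f"alignment after 200 steps: {alignment:.2f}")
```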
2. Cloud robotics platforms
Cloud robotics platforms, like any other cloud software, can reduce costs (e.g. through easier integration and deployment) and improve scalability, flexibility (both for vendors, in terms of updating and maintenance, and for customers, in terms of features and service improvements), accessibility, and reliability [40].
Existing platforms such as AWS RoboMaker [41], ROCOS [42], Rapyuta [43], and others offer fleet-related features like monitoring, software updating, control, and teleoperation. Activities such as task scheduling and allocation can also be done in the cloud, but it has to be noted that the more data you have (from robots and tasks), the more bandwidth you need.
Also, cloud robotics can provide individual robot-related features like computation for perception and computer vision, localization and motion planning, manipulation, and grasping. Such capabilities make it possible to change the robot configuration, for instance to install less powerful onboard computers, which reduces the cost of the whole fleet.
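To make the offloading idea concrete, here is a minimal sketch in which a robot sends a camera frame to a cloud perception service instead of running a heavy vision model onboard. The endpoint URL, payload, and response fields are invented for illustration; each real platform defines its own API.

```python
import requests

def detect_objects_in_cloud(image_bytes: bytes) -> list:
    """Offload object detection for one camera frame to a hypothetical
    cloud perception service."""
    resp = requests.post(
        "https://fleet.example.com/api/v1/perception/detect",  # hypothetical endpoint
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=2.0,  # bound the latency: the robot must stay responsive
    )
    resp.raise_for_status()
    return resp.json()["detections"]  # e.g. [{"label": "pallet", "bbox": [...]}]
```

The timeout is the important design choice here: when connectivity degrades, the robot should fall back to simpler onboard behavior rather than block on the cloud.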
Finally, cloud-based solutions drive new business models (e.g. Robot-as-a-Service, RaaS) and allow vendors to improve products and services by getting insights from the gathered data.
3. Augmented reality for interaction
According to the taxonomy of mixed reality proposed by Milgram and Kishino [44], two key concepts can be highlighted: 1) “Augmented Reality (AR) — any case in which an otherwise real environment is “augmented” by means of virtual (computer graphic) objects”; and 2) a “Virtual Reality (VR) environment is one in which the participant-observer is totally immersed in, and able to interact with, a completely synthetic world.”
AR and VR can be used in many use cases with different configurations. Below are some examples.
Spatial AR for fleet supervision
Even though the level of automation keeps rising and less and less human attention is required, in some situations a supervisor still has to look at the factory floor, e.g. for visual monitoring or for resolving issues as they occur.
If the factory floor is within a direct line of sight (from a control room separated by glass, for instance), the visually perceived scene can be augmented with the status and state of the robots, their positions during occlusions, their paths, intents, and so on. In this case, all the desired information regarding the robots can be perceived naturally from one source rather than from several different displays or applications.
This example can be implemented using holographic glass such as the HOPS projection glass [45], a transparent OLED display, or a projector that projects an image onto a semi-transparent screen in such a way that the factory floor behind it remains visible to the observer.
Projected AR for human-robot interaction
Human-robot interaction is a large and extremely important research topic that actively exploits AR.
In this example, a typical configuration is a projector mounted on a robot that projects the robot’s intent or other information onto the floor surface. This way of interaction is intuitive and allows a group of people to look at the same augmented object, which can be an advantage compared to head-mounted displays.
Head-mounted AR display for robot maintenance and repair
The health of the entire fleet depends on the condition of each individual robot: the more effective the process of finding and fixing issues, the better the fleet performs. Wearable devices such as Microsoft HoloLens, Magic Leap, and Google Glass can be used to add a digital augmented layer to the user’s vision. It can show hints and instructions during tasks such as robot inspection, maintenance, and repair, making the process more efficient, intuitive, and handy [48][49][50].
4. Indoor positioning and robot localization
Determining and tracking the position and orientation of robots are crucial tasks for path planning and traffic management. High precision is necessary to prevent collisions of robots with each other and with the environment. Many approaches and technologies have already been studied in A Survey of Indoor Localization Systems and Technologies [52]. Here we just mention the group of technologies that require extra physical infrastructure (Wi-Fi, Bluetooth, ZigBee, RFID) and the standard sensing technologies for robot localization (lidars, cameras, ultrasound and infrared sensors, IMUs). Approaches such as sensor fusion have become more and more popular, improving localization and positioning accuracy by combining data from several sensors at the same time.
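As a minimal illustration of sensor fusion, the sketch below is a one-dimensional Kalman filter that fuses relative wheel-odometry displacements with occasional absolute position fixes (e.g. from a beacon system). The noise values are illustrative assumptions.

```python
def kalman_step(x, p, u, z=None, q=0.02, r=0.25):
    """One predict/update cycle of a 1D Kalman filter.

    x, p: current position estimate and its variance
    u:    odometry displacement since the last step (relative, drifts)
    z:    absolute position fix, e.g. from a beacon (None if unavailable)
    q, r: process and measurement noise variances (illustrative values)
    """
    # predict: integrate odometry, uncertainty grows
    x, p = x + u, p + q
    if z is not None:
        # update: blend in the absolute fix, uncertainty shrinks
        k = p / (p + r)                      # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
    return x, p

x, p = 0.0, 1.0
for u, z in [(0.5, None), (0.5, 1.1), (0.5, None), (0.5, 2.05)]:
    x, p = kalman_step(x, p, u, z)
    print(f"position ~ {x:.2f}, variance ~ {p:.3f}")
```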
5. API and integrations
Even though a robot fleet management system can act as a stand-alone application, there are often many other systems in the operational environment, so integration at the software level is required. An API makes it possible to build custom solutions and cover more use cases, while ready-made integrations make RFMSs easier to deploy and save resources on embedding them into the facility.
With the ongoing development of IoT, more and more devices such as doors, elevators, and industrial machines become connectivity-ready. For this purpose, compatibility with communication protocols and interfaces is useful for RFMSs: Open Platform Communications (OPC) and OPC Unified Architecture (OPC UA), Data Distribution Service (DDS), and Message Queuing Telemetry Transport (MQTT). Another example is an interface designed for communication between AMRs and a master control, such as VDA 5050 by the Verband der Automobilindustrie [53] and the open interface proposed by Quadrini et al. [54].
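As an illustration, a robot state report can be published over MQTT in a few lines. The sketch below assumes the paho-mqtt 1.x client and uses a simplified VDA 5050-style topic and payload; the real schema is much richer, and the broker address is hypothetical.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="amr-042")
client.connect("broker.example.com", 1883)   # hypothetical MQTT broker

# abridged VDA 5050-style state report (illustrative fields only)
state = {
    "serialNumber": "amr-042",
    "operatingMode": "AUTOMATIC",
    "batteryState": {"batteryCharge": 87.5},
    "agvPosition": {"x": 12.3, "y": 4.5, "theta": 1.57},
}
client.publish("uagv/v2/exampleVendor/amr-042/state", json.dumps(state))
client.disconnect()
```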
As for other software integrations, there are systems such as Enterprise Resource Planning (ERP), Manufacturing Execution Systems (MES), and Warehouse Management, Execution, or Control Systems (WMS, WES, WCS). For such systems, HTTPS REST APIs and WebSocket connections are usually used.
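For example, a WMS could create a transport job in the RFMS over such a REST API. The endpoint and payload below are hypothetical, since each vendor defines its own schema.

```python
import requests

job = {
    "type": "TRANSPORT",
    "priority": 5,
    "pickup": "shelf-A-12",
    "dropoff": "packing-station-3",
}
resp = requests.post(
    "https://rfms.example.com/api/v1/jobs",   # hypothetical RFMS endpoint
    json=job,
    headers={"Authorization": "Bearer <token>"},
    timeout=5.0,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"jobId": "J-1083", "status": "QUEUED"}
```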
6. Graphical User Interface (GUI)
Although in an ideal world all jobs would be done autonomously, so that a human supervisor would not need any interface at all, today a GUI is still part of the web or desktop application of an RFMS. It helps users supervise and control a fleet in a more intuitive way compared to a command prompt and intermediate exchange files with extensions such as CSV or JSON.
Several modules can be highlighted here.
Jobs designer and tracker
Jobs can be created by an RFMS automatically, triggered by an external system (e.g. ERP, MES), or manually by the user. A GUI is the common tool for the latter case.
A job designer can look like a form with any kind of typical elements (edit fields, scroll bars, dropdowns, etc.), or it can offer standard elements as building blocks, still with the possibility to adjust parameters.
A jobs tracker usually looks like a list or a table with job parameters such as ID, type, status, assigned vehicle ID, and priority.
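Behind such a tracker, a job record can be as simple as the following data structure. The field names mirror the parameters listed above but are otherwise assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class JobStatus(Enum):
    QUEUED = "queued"
    ASSIGNED = "assigned"
    IN_PROGRESS = "in_progress"
    DONE = "done"
    FAILED = "failed"

@dataclass
class Job:
    id: str
    type: str                                  # e.g. "transport", "charging"
    status: JobStatus
    priority: int                              # convention varies between vendors
    assigned_vehicle_id: Optional[str] = None  # empty until allocation

jobs = [
    Job("J-1", "transport", JobStatus.IN_PROGRESS, 5, "amr-042"),
    Job("J-2", "transport", JobStatus.QUEUED, 3),
]
```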
Vehicles tracker
Usually it looks like vehicle icons (or pictograms) dynamically mapped onto the facility layout, supplemented by a list or a table with vehicle parameters such as ID, status, and battery level. Sometimes it is also possible to look up detailed vehicle data in a separate view.
Facility layout labeling
Many RFMSs provide tools for labeling and annotating factory floor layouts to define navigation in different areas: freeways, paths, waypoints, lanes, and restricted areas. These tools also allow users to mark places for docking, charging, and so on.
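Internally, such annotations often boil down to a structured document. A hypothetical example of what a labeled layout might look like (all field names are invented for illustration):

```python
layout = {
    "map": "warehouse-west.png",
    "resolution_m_per_px": 0.05,   # map scale: meters per pixel
    "waypoints": {
        "pickup-1": {"x": 12.3, "y": 4.5},
        "charger-1": {"x": 0.5, "y": 0.5},
    },
    "lanes": [
        {"from": "pickup-1", "to": "charger-1", "one_way": True},
    ],
    "restricted_areas": [
        {"polygon": [[3, 3], [3, 6], [6, 6], [6, 3]], "reason": "manual forklifts"},
    ],
}
```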
7. Teleoperation and remote control
Historically, the more complex a task was, the less autonomy and the more teleoperation was used in robotic systems [55]. Recently, however, we can observe more and more autonomous behavior even in complex missions such as the DARPA Subterranean Challenge [56], where groups of robots perform autonomous exploration in a tough unknown environment.
Warehouses and factories are fairly deterministic environments, so low-level teleoperation with direct access to actuators and sensors may not be required; however, high-level capabilities like sending a command to move to a certain location are often needed. Furthermore, modern RFMSs provide other remote control capabilities: switching robot modes (in/out of service), changing configuration parameters (velocity, obstacle inflation parameters), or deploying updates.
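A sketch of such a high-level “move to location” command, assuming a ROS 1 robot that exposes the standard move_base navigation action (the coordinates are arbitrary):

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("rfms_goal_sender")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"        # goal expressed in the map frame
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 12.3         # target location, meters
goal.target_pose.pose.position.y = 4.5
goal.target_pose.pose.orientation.w = 1.0       # identity orientation

client.send_goal(goal)   # the robot plans the path and drives autonomously
client.wait_for_result()
```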
8. Visualization and simulation
The initial integration of a robot fleet into a company’s infrastructure, as well as further changes, can be expensive. Errors and mistakes there can cause financial losses, workflow delays, and even safety issues.
For the initial modeling and for evaluating the feasibility of a deployment and of changes, simulators with 2D and 3D visualization can be used. Simulators differ in their capabilities for graphics, physics, and the logic of job allocation, path planning, and traffic management. Modeling (and simulating) a fleet and its behavior in a digital environment is a way to validate hypotheses, conduct experiments, and get an impression of the consequences of changes before making them.
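A minimal discrete-event sketch of such a what-if experiment, using the SimPy library: jobs arrive at a fixed rate and compete for a pool of robots, and the run estimates throughput for a given fleet size. All durations are illustrative.

```python
import simpy

JOB_INTERVAL, JOB_DURATION, FLEET_SIZE = 2.0, 5.0, 3  # minutes, minutes, robots
completed = 0

def job(env, fleet):
    global completed
    with fleet.request() as req:          # wait until a robot is free
        yield req
        yield env.timeout(JOB_DURATION)   # the robot executes the transport
        completed += 1

def job_source(env, fleet):
    while True:
        yield env.timeout(JOB_INTERVAL)   # a new job arrives
        env.process(job(env, fleet))

env = simpy.Environment()
fleet = simpy.Resource(env, capacity=FLEET_SIZE)  # the fleet as a resource pool
env.process(job_source(env, fleet))
env.run(until=480)                        # simulate an 8-hour shift
print(f"{FLEET_SIZE} robots completed {completed} jobs in 480 min")
```

Rerunning the same model with a different FLEET_SIZE immediately shows whether adding robots pays off or the bottleneck lies elsewhere.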
References
[32] Amanatiadis, A., Henschel, C., Birkicht, B., Andel, B., Charalampous, K., Kostavelis, I., … & Gasteratos, A. (2015, May). Avert: An autonomous multi-robot system for vehicle extraction and transportation. In 2015 IEEE International Conference on Robotics and Automation (ICRA) (pp. 1662–1669). IEEE.
[33] Alonso-Mora, J., Knepper, R., Siegwart, R., & Rus, D. (2015, May). Local motion planning for collaborative multi-robot manipulation of deformable objects. In 2015 IEEE international conference on robotics and automation (ICRA) (pp. 5495–5502). IEEE.
[34] Arai, T., Pagello, E., & Parker, L. E. (2002). Advances in multi-robot systems. IEEE Transactions on robotics and automation, 18(5), 655–661.
[35] Dudek, G., Jenkin, M., & Milios, E. (2002). A taxonomy of multirobot systems. Robot teams: From diversity to polymorphism, 3–22.
[36] Farinelli, A., Iocchi, L., & Nardi, D. (2004). Multirobot systems: a classification focused on coordination. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 34(5), 2015–2028.
[37] Gautam, A., & Mohan, S. (2012, August). A review of research in multi-robot systems. In 2012 IEEE 7th international conference on industrial and information systems (ICIIS) (pp. 1–5). IEEE.
[38] Rizk, Y., Awad, M., & Tunstel, E. W. (2019). Cooperative heterogeneous multi-robot systems: A survey. ACM Computing Surveys (CSUR), 52(2), 1–31.
[39] Yan, Z., Jouandeau, N., & Cherif, A. A. (2013). A survey and analysis of multi-robot coordination. International Journal of Advanced Robotic Systems, 10(12), 399.
[40] Dar, A. A. (2018). Cloud Computing-Positive Impacts and Challenges in Business Perspective. Journal of Computer Science & Systems Biology, 12(1), 15–18.
[41] https://aws.amazon.com/robomaker/
[42] https://www.rocos.io/
[43] https://www.rapyuta-robotics.com/
[44] Milgram, P., & Kishino, F. (1994). A taxonomy of mixed reality visual displays. IEICE TRANSACTIONS on Information and Systems, 77(12), 1321–1329.
[45] https://www.visionoptics.de/en/portfolio_en/projection/
[46] Verbeij, M. Explorative study for application of spatial augmented reality on factory automated ground vehicles. Master thesis, 2020.
[47] Fernandez-Carmona, M. (2020). Report on human-robot spatial interaction and mutual communication of navigation intent. Technical report, ILIAD Project.
[48] Palmarini, R., Erkoyuncu, J. A., Roy, R., & Torabmostaedi, H. (2018). A systematic review of augmented reality applications in maintenance. Robotics and Computer-Integrated Manufacturing, 49, 215–228.
[49] Siew, C. Y., Nee, A. Y. C., & Ong, S. K. (2019, July). Improving Maintenance Efficiency with an Adaptive AR-assisted Maintenance System. In Proceedings of the 2019 4th International Conference on Robotics, Control and Automation (pp. 74–78).
[50] Erkoyuncu, J. A., del Amo, I. F., Dalle Mura, M., Roy, R., & Dini, G. (2017). Improving efficiency of industrial maintenance with context aware adaptive authoring in augmented reality. Cirp Annals, 66(1), 465–468.
[51] Puljiz, D. Prototype demonstrator of AR based interaction concepts. SafeLog project deliverable, 2018.
[52] Zafari, F., Gkelias, A., & Leung, K. K. (2019). A survey of indoor localization systems and technologies. IEEE Communications Surveys & Tutorials, 21(3), 2568–2599.
[53] https://www.vda.de/en/services/Publications/vda-5050-v-1.1.-agv-communication-interface.html
[54] Quadrini, W., Negri, E., & Fumagalli, L. (2020). Open interfaces for connecting automated guided vehicles to a fleet management system. Procedia Manufacturing, 42, 406–413.
[55] Kievit-Kylar, B., Schermerhorn, P., & Scheutz, M. (2012). From Teleoperation to Autonomy: “Autonomizing” Non-Autonomous Robots. In Proceedings of the 12th International Conference on Artificial Intelligence (Vol. 10).
[56] https://www.subtchallenge.com/