Clutter-clearing robotic system

Publication Number:
CA3196451A1
Publication Date:
2022-06-02
Application Number:
CA3196451
Filing Date:
2021-11-30
Grant Date:
-
Receiving Office:
Canada
Patent Type:
Invention Application
Simple Legal Status:
Pending
Legal Status/Event:
Published
IPC Classification:
A47L11/38 | A47L11/40 | B25J9/00 | B25J9/16
Strategic Emerging Industry Classification:
Intelligent Manufacturing Equipment Industry
National Economic Industry Classification:
C3855
Current Applicant (Patentee):
CLUTTERBOT INC.
Original Applicant (Patentee):
CLUTTERBOT INC.
Current Applicant (Patentee) Address:
2093 Philadelphia Pike #1348, CLAYMONT, DE, US
Inventors:
HAMILTON, JUSTIN DAVID | WOLFE, KALEN FLETCHER | BANNISTER-SUTTON, JACK ALEXANDER | FRIZZELL, BRYDEN JAMES
Agency:
-
Agent:
MOFFAT & CO.
Abstract:
A robot is operated to navigate an environment using cameras, mapping the type, size, and location of objects. The system classifies the detected objects for association with specific containers. For each object category with a corresponding container, the robot chooses a specific object in that category to pick up, performs path planning, and navigates to objects of the category, to either organize or pick up the objects. Actuated pusher arms move other objects out of the way and manipulate the target object onto the front bucket to be carried.
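For orientation, the following is a minimal sketch of the control loop the abstract (and claim 1 below) describes: per object category, choose targets, plan a path, clear obstacles with the pusher arms, load the front bucket, and deposit the load at the category's container. Every identifier in it (`declutter`, `Detection`, the `robot` methods) is a hypothetical stand-in; the application discloses no source code or API.

```python
# Illustrative control loop for the method of claim 1. Every class,
# method, and attribute name here is a hypothetical stand-in; the
# application does not disclose an implementation.
from dataclasses import dataclass

@dataclass
class Detection:
    category: str            # object type, e.g. "toy" or "clothing"
    size: float              # approximate footprint
    location: tuple          # (x, y) in the global area map

def declutter(robot, containers):
    """containers maps each object category to its container's location."""
    robot.undock()                          # activate the robot at the base station
    objects = robot.map_environment()       # camera-based map of type/size/location
    for category, container_xy in containers.items():
        targets = [o for o in objects if o.category == category]
        for obj in targets:
            robot.follow(robot.plan_path(robot.pose, obj.location))
            robot.arms.form_wedge()         # push obstacles aside (claims 5, 15)
            robot.arms.sweep_onto_bucket(obj)
            robot.bucket.tilt_up()          # retain the load in the bucket
            robot.arms.form_barrier()       # close off the bucket (claims 6, 16)
        robot.follow(robot.plan_path(robot.pose, container_xy))
        robot.align_back_end_with(container_xy)   # back end to the container side
        robot.bucket.raise_and_dump()       # deposit objects into the container
    robot.dock()
```

In practice the per-trip batching (claim 1 allows picking up "one or more" objects before each deposit) and the arm choreography would differ; the sketch only fixes the order of the claimed steps.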
Technical Problem Passage:
-
Technical Effect Passage:
-
Claims:
CLAIMS

What is claimed is:

1. A method comprising: associating each of a plurality of object categories for objects in an environment with corresponding containers situated in the environment; activating a robot at a base station; navigating the robot around the environment using cameras to map the type, the size, and the location of the objects; for each object category: choosing one or more of the objects to pick up in the category; performing path planning from a current location of the robot to one or more of the objects to pick up; navigating to an adjacent point of one or more of the objects to pick up; actuating manipulators of the robot to move obstacles out of the way and manipulate the one or more objects onto a bucket at a front end of the robot; one or both of tilting or raising the bucket and actuating the manipulators to retain the objects in the bucket; navigating the robot adjacent to the corresponding container for the category; aligning a back end of the robot with a side of the corresponding container; and raising the bucket over the robot and toward the back end to deposit the objects in the corresponding container.

2. The method of claim 1, further comprising: operating the robot to organize the objects in the environment into clusters, where each cluster comprises only objects from one of the categories.

3. The method of claim 1, further comprising: operating at least one first arm to actuate the manipulators of the robot to move obstacles out of the way and manipulate the one or more objects onto the bucket; and operating at least one second arm to tilt or raise the bucket.

4. The method of claim 1, where each first arm is paired with a corresponding second arm, and further comprising: operating each pairing of first arm and second arm from a common originating pivot point.

5. The method of claim 1, wherein actuating the manipulators of the robot to move obstacles out of the way comprises actuating the manipulators to form a wedge in front of the bucket.

6. The method of claim 5, wherein actuating the manipulators to retain the objects in the bucket comprises actuating the manipulators to form a barrier in front of the bucket.

7. The method of claim 1, further comprising: operating a neural network to determine the type, size and location of the objects from images from the cameras.

8. The method of claim 1, further comprising: generating scale invariant keypoints within a decluttering area of the environment based on input from a left camera and a right camera; detecting locations of the objects in the decluttering area based on the input from the left camera and the right camera, thereby defining starting locations; classifying the objects into the categories; generating re-identification fingerprints for the objects, wherein the re-identification fingerprints are used to determine visual similarity between the objects; localizing the robot within the decluttering area based on input from at least one of the left camera, the right camera, light detecting and ranging (LIDAR) sensors, and inertial measurement unit (IMU) sensors, to determine a robot location; mapping the decluttering area to create a global area map including the scale invariant keypoints, the objects, and the starting locations; and re-identifying the objects based on at least one of the starting locations, the categories, and the re-identification fingerprints.

9. The method of claim 8, further comprising: assigning persistent unique identifiers to the objects; receiving a camera frame from an augmented reality robotic interface installed as an application on a mobile device; updating the global area map with the starting locations and the scale invariant keypoints using a camera-frame-to-global-area-map transform based on the camera frame; and generating indicators for the objects, wherein the indicators include one or more of next target, target order, dangerous, too big, breakable, messy, and blocking travel path.

10. The method of claim 9, further comprising: transmitting the global area map and object details to the mobile device, wherein the object details include at least one of visual snapshots, the categories, the starting locations, the persistent unique identifiers, and the indicators of the objects; displaying the updated global area map, the objects, the starting locations, the scale invariant keypoints, and the object details on the mobile device using the augmented reality robotic interface; accepting inputs to the augmented reality robotic interface, wherein the inputs indicate object property overrides including change object category, put away next, don't put away, and modify user indicator; transmitting the object property overrides from the mobile device to the robot; and updating the global area map, the indicators, and the object details based on the object property overrides.

11. A robotic system comprising: a robot; a base station; a plurality of containers each associated with one or more object categories; a mobile application; and logic to: navigate the robot around an environment comprising a plurality of objects to map a type, a size, and a location of the objects; for each of the categories: choose one or more of the objects to pick up in the category; perform path planning to the objects to pick up; navigate to points adjacent to each of the objects to pick up; actuate manipulators of the robot to move obstacles out of the way and push the objects to pick up onto a bucket at a front end of the robot; one or both of tilt and raise the bucket, and actuate the manipulators to retain the objects to pick up in the bucket; navigate the robot adjacent to the corresponding container for the category; align a back end of the robot with a side of the corresponding container; and raise the bucket over the robot and toward the back end to deposit the objects to pick up in the corresponding container.

12. The system of claim 11, further comprising logic to operate the robot to organize the objects in the environment into clusters, where each cluster comprises only objects from one of the categories.

13. The system of claim 11, wherein the robot comprises at least one first arm and at least one second arm, the system further comprising: logic to actuate the manipulators of the robot to move obstacles out of the way and push the one or more objects onto the bucket and operate at least one second arm to tilt or raise the bucket.

14. The system of claim 11, where each first arm is paired with a corresponding second arm, and each pairing of first arm and second arm has a common originating pivot point.

15. The system of claim 11, further comprising logic to actuate the manipulators of the robot to form a wedge in front of the bucket.

16. The system of claim 15, further comprising logic to actuate the manipulators to form a closed barrier in front of the bucket.

17. The system of claim 11, further comprising: a neural network configured to determine the type, size and location of the objects from images from the cameras.

18. The system of claim 11, further comprising logic to: generate scale invariant keypoints within a decluttering area of the environment based on input from a left camera and a right camera; detect locations of the objects in the decluttering area based on the input from the left camera and the right camera, thereby defining starting locations; classify the objects into the categories; generate re-identification fingerprints for the objects, wherein the re-identification fingerprints are used to determine visual similarity between the objects; localize the robot within the decluttering area to determine a robot location; and map the decluttering area to create a global area map including the scale invariant keypoints, the objects, and the starting locations.

19. The system of claim 18, further comprising logic to: re-identify the objects based on at least one of the starting locations, the categories, and the re-identification fingerprints.

20. The system of claim 19, further comprising logic to: classify the objects as one or more of dangerous, too big, breakable, and messy.
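Claims 8-10 and 18-20 describe a perception pipeline: stereo-camera keypoints and detections, category classification, re-identification fingerprints compared by visual similarity, persistent unique identifiers, and a global area map shared with an augmented-reality mobile interface. Below is a small sketch of just the re-identification step under stated assumptions: fingerprints are plain feature vectors, matching is cosine similarity gated on category, and the 0.85 threshold is invented for illustration; none of this is disclosed by the application.

```python
# Hypothetical sketch of the re-identification step from claims 8 and 18:
# each detected object gets a fingerprint vector, and a new detection is
# matched to a known object when its fingerprint is sufficiently similar.
import math
import uuid

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ObjectRegistry:
    """Tracks objects by persistent unique identifier (claim 9)."""

    def __init__(self, match_threshold=0.85):    # threshold is an assumption
        self.known = {}                          # uid -> (category, fingerprint)
        self.match_threshold = match_threshold

    def reidentify(self, category, fingerprint):
        """Return the uid of a matching known object, or register a new one."""
        best_uid, best_sim = None, 0.0
        for uid, (known_cat, known_fp) in self.known.items():
            if known_cat != category:            # claims gate matching on category
                continue
            sim = cosine_similarity(fingerprint, known_fp)
            if sim > best_sim:
                best_uid, best_sim = uid, sim
        if best_uid is not None and best_sim >= self.match_threshold:
            return best_uid                      # same object seen again
        new_uid = str(uuid.uuid4())              # persistent unique identifier
        self.known[new_uid] = (category, fingerprint)
        return new_uid
```

Claim 8 also allows re-identification by starting location; a fuller version would combine location distance with fingerprint similarity before assigning a new persistent identifier.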
Technical Field:
-
Background Art:
-
Summary of Invention:
-
Detailed Description of Embodiments:
-