
We have self-driving cars in the works. Farming robots are advancing all the time. Recently, sophisticated irrigation management for the home has become a hot thing. Robotic vacuum cleaners and lawn mowers have been available for a while and keep getting better.

The age of Robots is upon us.

These are not smart robots we can have conversations with, or robots that want to conquer the world, but they are smart enough to do things that we used to have people do.

Let’s face it, most people’s jobs are not that creative. They don’t require massive intellect. Most jobs are relatively rote. However, the real world is complicated, and dealing with anything in the real world requires some intelligence.

Thus the latest AI technology is “good enough” that a lot of tasks that humans had to do in the past can now be done by a combination of vastly improved mechanical systems and the software we call AI.

This has been proved by the robots we’ve built today, and these systems keep evolving and getting better. Google has proven that a properly programmed computer with sensors can drive safely on the streets of Mountain View, CA at less than 25 mph, getting people from A to B without a driver ever having to take over.

It seems inevitable that in 10 years, with a concerted effort, we can improve this technology so that cars can drive themselves at reasonable speed and get us from A to B without humans. Some 4 million jobs in the US are driving jobs. Limos, taxis and, most especially, cargo transport in trucks employ a lot of people. In 10 years we can reasonably predict most of these jobs will be either gone or in jeopardy.

At home

So, what is the next step in this chain of devices for the home? First, it has to be something we do a lot of and find annoying. It has to be within the practical limits of our technology and not too expensive.

What about sex robots?

There is a lot of talk of sex robots. I don’t think this falls into the annoying category, or something we do a lot of, but for those frustrated by the complexity of dealing with the opposite sex it is extremely important.

I really have no interest in this but it does seem a lot of people may be interested. I have not really thought about how well that would work. I think something like that could be dangerous and frankly seems disturbing on many levels.

Let’s talk about something I think could be quite possible.

The Pick Up Robot

It is my belief, as described in my future series here, that one of the next things in robotic evolution is the pick-up robot. Here is more of what I am talking about.

The kinds of things a robot like this could do:

1. Finding things that it identifies as pickable and out of place, and transporting them safely to where they belong

2. Replacing light bulbs

3. Scrubbing surfaces

4. Cleaning up accidents

5. Keeping drinks or food full of appropriate stuff

6. Cooking basic meals

7. Keeping inventory and making lists of things needed

8. Folding sheets, towels and other things

9. Loading the washing machine and dryer, unloading them, ironing and putting away clothes

10. Opening and closing doors automatically, turning lights on and off

11. Speech recognition and basic commands

12. Manipulating devices such as oven controls, washers and stereos

13. Moving heavy things

Key pieces of technology and their state

What do we need to do the things described above? Not as much as you might think, and we already have a lot of it.

Here are some of the technologies we need for the pick-up robot:

1. Recognition of objects

A pickup robot would need to identify objects around it. It would need to see a glass and recognize it. Once it recognizes the glass, it can know where it goes and put it there using the other technology I describe below. The first thing is being able to recognize the things around it.

The new AI neural network technology is being tuned to do various recognition tasks. We know of voice recognition, text recognition and face recognition but there is also an effort to recognize objects.

Object recognition is challenging. There is a wide variety of objects in the world. It is crucial to keep working on this technology for car self-driving too. Cars must recognize people, bikes and objects that are in the way.

So, while I can’t say the technology is perfected, we can constrain the problem. We can identify objects of interest to a pickup robot. It would be interested in cups, plates, silverware, clothing, bottles and labels on products: the kinds of things the robot would have to deal with. It doesn’t need to recognize rockets or street signs or possible high-speed collisions. It does need to see that a glass is on a table and what a washing machine is.

You may have used a product called Vivino. I use it all the time. It is very good at recognizing the labels on wine bottles. While this is trivial in comparison to what I’m talking about, the leap from labels to recognizing everyday objects is doable.

There is a database of 600,000 objects that a standards body keeps. Object recognition software is being trained to recognize these objects. There is a contest to see who can do better but this is a doable task.

When an object isn’t recognized generically, it may have to be trained into the robot and told what it is. A house doesn’t contain so many objects that the robot couldn’t be trained on the critical ones it needs to deal with.

It will be critical for the robot to recognize not only that this is a glass, but that a glass may have contents that could spill, and to determine how full it is.

Once objects are recognized, the robot should have an understanding of which objects go where. This is a matter of teaching the robot the normal place for each object. Also, the house has things like hampers, washing machines of various types, dryers, etc. Robots might be smart enough to wander a house and discover everything, but frankly, giving the robot an introduction (“here is the dishwasher, and it is model such and such”) would be easy.

An important task for a pickup robot would be inventory control.

A simple task for such a recognition capability is to take inventory. The robot could go from shelf to shelf after being shown where they are, look into the refrigerator, and recognize things by their product labels. It would learn to count such things and where they go.

When a label isn’t available, the robot can use general recognition software (for instance, “this is a hot dog”). The robot can also use history: likely guesses based on what the owner has had before.

I could imagine that if a robot can’t figure out what something is, it could ask the homeowner and then remember what it is the next time it sees it.
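The ask-the-homeowner fallback above can be sketched in a few lines. This is a hedged illustration, not a real recognition system: the label whitelist, the confidence threshold and the feature “signatures” are all made-up stand-ins for what a trained recognizer would produce.

```python
# Sketch: constrained household recognition with an ask-the-owner fallback.
# All names, labels and thresholds here are hypothetical.

HOUSEHOLD_LABELS = {"cup", "plate", "fork", "towel", "bottle", "glass"}
CONFIDENCE_THRESHOLD = 0.8

class HouseholdRecognizer:
    def __init__(self):
        # Labels the homeowner has taught us, keyed by a feature signature.
        self.learned = {}

    def classify(self, features, model_guess, confidence):
        """Return a label, or None if the homeowner should be asked."""
        signature = tuple(sorted(features))
        if signature in self.learned:       # previously taught by the owner
            return self.learned[signature]
        if model_guess in HOUSEHOLD_LABELS and confidence >= CONFIDENCE_THRESHOLD:
            return model_guess              # constrained, confident guess
        return None                         # fall back to asking the owner

    def learn(self, features, label):
        """Remember the owner's answer so we recognize this object next time."""
        self.learned[tuple(sorted(features))] = label
```

The point of the whitelist is the constraint argued for above: the robot only needs confident answers within the bounded set of household objects, and anything else becomes a question for the owner.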

Can recognition get this good?

The neural recognition software we use today (deep neural networks) develops abstractions the way we think the human brain does. It starts by recognizing smaller things first, then grasps how combinations of those smaller recognized things belong to larger things. It recognizes letters, then words, just like us. It recognizes eyes and lips, then faces. We build abstractions, and the AI can do this to at least two levels.

We need the object recognition to at least be able to identify first- and second-order abstractions of features of objects. If it can’t figure out what the object is, hopefully knowing some features of the object will be specific enough for the robot to learn what it is by association of those features.

In the home, the array of objects to be recognized is large but bounded. It is easy to have libraries of common objects and databases about them. For instance, knowing that a dishwasher is a GE model 123, the robot can easily know what controls the washer has, where they are and how to operate them.
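A minimal sketch of that appliance-database idea is below. The “GE model 123” entry and its control layout are invented for illustration; a real database would come from manufacturers or a shared library.

```python
# Sketch of an appliance database lookup. The entry below is made up.

APPLIANCE_DB = {
    ("GE", "model 123"): {
        "type": "dishwasher",
        "controls": {"start": "front panel, left", "cycle": "dial, center"},
        "cycles": ["normal", "heavy", "rinse"],
    },
}

def lookup_appliance(maker, model):
    """Return what we know about an appliance, or None if it needs training."""
    return APPLIANCE_DB.get((maker, model))
```

An appliance that isn’t in the database is exactly the case where the homeowner would train the robot during setup.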

The point is the robot doesn’t have to be perfect or as good as a human to be useful.

I believe we are 5-10 years away from much better and practical object recognition in general and this is definitely something that we can do.

We would also expect pick-up robots to be good at understanding orders, so speech recognition is a crucial skill. Fortunately, the range of commands is limited, but we could also pair the pickup robot with Alexa-type technology so it could answer questions, call people, find people, etc.

2. Picking up objects and putting them down

Another area in serious development now is picking up objects. This may seem trivial, but it isn’t. Robots don’t have the sensitive nervous system we have in our fingers. They don’t have the control of their joints or the range of power that our hands have. Picking things up is fraught with problems.

When robots have to pick up things today, they drop them frequently, grasping them the wrong way or with the wrong pressure. They break objects. They simply fail to find any way to pick something up reliably.

Google and a couple of other firms are working on training robot arms to pick up things by trying different strategies and learning what works. They have dozens of robots working to build a database and an AI that understands how to pick up different objects.

Part of this is determining what the object is. From that you might be able to deduce the fragility of the object and the best way to pick it up, but nothing beats trying, so Google and others are basically having the robots try everything and learn what works, using the same AI technology I referred to earlier.
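The try-everything-and-learn approach can be sketched as a simple bandit over grasp strategies. This is a toy stand-in for the large-scale arm-farm learning described above, and the strategy names are invented.

```python
import random

# Sketch: epsilon-greedy learning of which grasp strategy works for an
# object type. Strategy names and the exploration rate are illustrative.

class GraspLearner:
    def __init__(self, strategies, epsilon=0.1):
        self.strategies = strategies
        self.epsilon = epsilon
        self.successes = {s: 0 for s in strategies}
        self.attempts = {s: 0 for s in strategies}

    def _success_rate(self, s):
        return self.successes[s] / self.attempts[s] if self.attempts[s] else 0.0

    def choose(self):
        # Mostly exploit the best-known strategy, sometimes explore a new one.
        if random.random() < self.epsilon:
            return random.choice(self.strategies)
        return max(self.strategies, key=self._success_rate)

    def record(self, strategy, succeeded):
        self.attempts[strategy] += 1
        if succeeded:
            self.successes[strategy] += 1
```

After enough recorded attempts, the learner converges on whichever grasp empirically works for that object, which is the essence of the trial-based training described above.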

We also need better sensors and feedback mechanisms for robot fingers. Hand design needs to improve as we learn what works. Robots in factories today are designed for specific rote tasks, with parts designed to be manipulated by the robots. A pick-up robot would need vastly more flexible limbs and hands.

I think we may be 5 years from reliable pick up capability and more sophisticated hands and software able to reliably pick up and put down objects in general.

3. Mapping and moving from point to point avoiding objects

Some of the vacuum robots learn the house configuration and the location of objects, and plan their vacuuming.

However, most robot vacuums aren’t that smart yet. In general these things operate by random movement. They avoid objects by bumping into them softly and changing course.

A pick up robot would have to have a detailed map of your home. It would have to know: this towel is dirty and needs to go into the hamper; I am here; here is where the hamper is; this is the path.

It would need to recognize objects in its path and either stop for humans or avoid objects. A self-driving car can use GPS to determine the route from A to B, but while driving it generally doesn’t use GPS for decisions about avoiding obstacles or following the lane. GPS is not very accurate; even with improvements it is no better than about 10 feet.

GPS would not be accurate enough for mapping a home. A pick up robot would first have to train itself to learn where the halls, rooms and walls are. You would have to tell it where it is allowed to go and where it isn’t.

Since furniture and other objects move around during the day, it would still need more sophisticated avoidance than vacuum-style bumping. It would have to be more like the car, though not perfect. It would need to recognize where it is by the outline of the room and the location of objects, and to precisely determine its position at all times.
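Once the robot has a map, getting from A to B is a search problem. Here is a toy sketch, assuming the home has been reduced to a small occupancy grid (0 is free floor, 1 is a wall or furniture): breadth-first search finds a path or reports that none exists. A real robot would plan over a learned map and replan as obstacles move.

```python
from collections import deque

# Sketch: shortest path on an occupancy grid via breadth-first search.
# 0 = free floor, 1 = wall or furniture.

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None                          # goal unreachable
```

The interesting work in practice is keeping the grid up to date as furniture moves, which is exactly the avoidance problem described above.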

The robot should be designed so that casual brushing or impact with it wouldn’t hurt a child, a frail adult or the robot itself. Some cushioning around the robot would be desirable, ideally made into something attractive.

The robot would also need to be aware of its condition and location in space. We call this proprioception in humans. It wouldn’t need to be as sophisticated but we wouldn’t want the robot operating with busted hands or broken limbs.

The robot would likely move on wheels. However, recent robots have gotten really good at two-legged operation and maintaining balance even on terrain such as snow and slippery surfaces. While this technology is impressive, I think legs are closer to V3 of the robot. Additional flexible limbs would be very costly and add complexity that isn’t needed.

The exception is if the robot is expected to go up or down stairs. There may be a simpler way to enable robots to do this than general-purpose legs. I haven’t seen any designs for robots that climb stairs, but it is likely this would be necessary.

4. Manipulation of objects

This is an area of more speculative development. A robot would have to be able to pick up and place objects as described above, but it would also be expected to be somewhat facile with objects.

This could be a v2 feature. At first pickup robots may simply transport objects. In v2 possibly they can use the objects or manipulate them.

There are specific things we can train the robot to do: folding linen, folding clothes, aligning objects on a shelf, scrubbing surfaces, opening and closing doors, moving chairs or other objects on the floor.

More sophisticated things like loading a dishwasher would be quite complicated and are probably v2 features. Putting things back on shelves may be a challenge if it requires stacking or rearranging objects. Doing this manipulation without breaking things, either the objects being manipulated or adjacent ones, would be a crucial skill.

Once we’ve picked an object up manipulating it without breaking it is the next phase.

We are probably 5-10 years from this technology.

5. Battery management

This is relatively easy and we already have this for most mobile devices. The ability to sense when the battery is weak and either dock to recharge or swap batteries is quite reasonable technology available today.
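The dock-when-low logic really is simple enough to sketch directly; the thresholds here are illustrative, not taken from any real device.

```python
# Sketch: battery management as a couple of threshold rules.
# Thresholds are illustrative.

LOW_BATTERY = 0.15    # dock below 15%
RESUME_LEVEL = 0.90   # resume work above 90%

def next_action(battery_level, docked):
    """Decide what the robot should do given its charge state."""
    if not docked and battery_level <= LOW_BATTERY:
        return "return_to_dock"
    if docked and battery_level < RESUME_LEVEL:
        return "keep_charging"
    if docked:
        return "resume_tasks"
    return "continue"
```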

6. Hands, Fingers, Sensors, Motors

We have a lot of work to do here. While we have robotic hands that can manipulate objects fairly well, they are generally extremely clumsy and don’t have sufficient feedback for more sophisticated manipulation.

There is some technology to improve robots’ tactile feedback in detecting the presence of objects on their surface. What is needed is not only this but also the resistance felt when trying to manipulate something, which can be registered by noticing the movement associated with a given level of pressure. If the object gives way, we can assume it is soft. However, we have no way to determine the texture of an object, for instance whether it is smooth or rough.

What we need is more feedback about how the object is responding to the pressure we are applying. I believe this is probably the hardest part of the pickup robot. Good hands will be as important as its recognition software.

Another hard part: when holding an object that contains food or liquid, don’t spill it. When holding an object, don’t bang it into other things. This requires sensitive sensors and visual recognition of surrounding objects at all times, including while carrying or moving objects.

As the robot moves through the house carrying an object, it is likely to bang into things. The object it carries may hamper its awareness of its own extent and its ability to make progress along a path. In some cases the robot might need some help; eventually, in V2, it should be able to reliably transport even large, heavy objects.

7. Training the robot

One of the big startup activities would be training the robot on where things go and what things are.

Showing the robot where the washer and dryer are. Maybe pointing out your preferred washing settings. Presumably it would be easy enough for the robot to have a database of most washing machines and devices in general, but there are a huge number, and some training may be necessary for older or rare devices.

We would also want to tell the robot what tasks we want it to do, and when. We would tell it preferred routes and probably arrange commands like “clean-up mode” or “take inventory.” Run the machines at certain times, or when the machine is full enough to do a wash.
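Rules like “run the wash when the machine is full enough, during allowed hours” are simple triggers. A sketch, with made-up thresholds:

```python
# Sketch: a trigger rule for starting a wash. Thresholds are illustrative.

ALLOWED_HOURS = range(9, 17)   # daytime only, as a homeowner preference
FULLNESS_THRESHOLD = 0.75      # don't run a nearly empty machine

def should_start_wash(hamper_fullness, hour):
    """True when the hamper is full enough and the time is allowed."""
    return hamper_fullness >= FULLNESS_THRESHOLD and hour in ALLOWED_HOURS
```

A library of small rules like this, set during the training period, is one plausible way the homeowner's preferences could be expressed.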

We can train the robot that we want this chair here and this ottoman there. The robot could easily position these articles precisely when required. Even a V1 robot could nudge many objects into position even if it couldn’t lift or otherwise manipulate them.

What is the pickup robot experience like?

Imagine that such a robot exists. Does this suddenly improve marriages so that far fewer people divorce? Maybe. Does this mean that people could spend more time having fun instead of feeling exhausted all the time? Maybe.

The drudgery of housework is a complaint a lot of people have, and many people hire maids to clean, but most people can’t afford it or don’t like the idea of having other people around their house constantly. Having an inanimate object do the drudgery is likely to be a big hit.

Robots can learn over time

It’s important to understand that once a robot has a certain mobility and general-purpose hands and sensors, it becomes a matter of programming: downloading new software can make the robot smarter and able to do more.

Thus a robot you buy may initially not be able to do many of the things described here, but over time, as the software improves, we can download updates, like a Tesla does, and have our robot dramatically improve what it can do.

The robot at first may simply be able to transport objects, but eventually learn to place them where they belong rather than on the counter. It may be able to assemble ingredients, then eventually to process them. It may understand some commands, and later more.

This is a powerful concept of upgradability we haven’t seen exploited much in home devices in the past. I believe such robots will be able to dramatically increase their functionality over time.

The kitchen and food in general

This is one of the big areas I think the robot could help with. It could keep an inventory of what we have. It could order things automatically.

A pickup robot could at first simply deliver dishes to the counter, and eventually put them in the dishwasher, possibly scraping them clean before doing so.

It could cook some basic things, possibly chop vegetables or blend things. This is probably V2 of the robot but most of these things can be programmed once the manipulation capabilities are in.

The pickup robot could put things away. Undoubtedly you would have to train it on where to put things, but sophisticated planning software would have to figure out how to place objects into their designated final destinations.

Avoiding destroying other objects while processing any object would be a crucial part of the robot’s programming, as described above.

Eventually tasks such as assembling the objects and goods required to make a specific dinner or meal would be relatively easy leaving the person to the complicated task of preparing the perfect recipe.

“Get me some pepper” could be an easy instruction. “Bring me the spices.” “Bring 2 cups of water to a boil” couldn’t be that hard. I can see a steady progression in functionality that is useful immediately but eventually progresses to programming in recipes and having the robot able to execute them. What if such a robot could download a recipe and make it as well as a chef?

What’s that worth?

The living room, Bedroom, Dining room etc, Parties, Entertaining

The living room is a central place of concern. Much of modern life seems to revolve around it. Finding objects that are out of place and need to be moved, and deciding when they are no longer in use and should be picked up or refreshed, would be important.

One task we’ve often thought would help: during entertaining, refreshing people’s glasses and keeping the serving tables stocked. Frequently we are so busy during parties taking care of things like this that we can’t enjoy the party.

Refreshing a drink or resupplying food would probably require a significant ability to manipulate objects and be aware of the environment. If I hold my glass out, pouring into it without spilling is probably quite a bit more complicated than pouring into a fixed object on a table. Such things are probably V2.

If we are expecting a robot to respond to commands from guests, we probably need a hierarchy of commands that guests are allowed to give versus homeowner residents. Recognizing who is giving the command would be important. Facial recognition software has come along well.
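The command hierarchy could start as simply as a whitelist per role. The commands below are invented examples, not a real command set.

```python
# Sketch: per-role command permissions. Command names are illustrative.

GUEST_COMMANDS = {"refill drink", "bring snacks"}
RESIDENT_COMMANDS = GUEST_COMMANDS | {"clean up", "take inventory", "run wash"}

def allowed(command, role):
    """True if this role may issue this command."""
    table = RESIDENT_COMMANDS if role == "resident" else GUEST_COMMANDS
    return command in table
```

Facial recognition would supply the `role` argument: recognized residents get the full set, everyone else defaults to the guest whitelist.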

Laundry

Laundry is certainly one of the least enjoyed tasks in the house. Doing laundry consists of:

1. Collecting the used clothing

2. Recognizing what it is and likely who it belongs to

3. Transporting it to the hamper

4. Emptying the hamper into the washing machine, hopefully separating out articles which need different processing. It might be necessary to train the robot on which clothes have to be dry-cleaned. If the robot could find and read care labels, that would be awesome. Moving clothes to the dryer and emptying the dryer.

5. Operating the washing machine and dryer

6. Ironing or hanging, and folding clothes

7. Placing items into the correct drawer or closet for the right person

Each of these requires quite a bit of specialized behavior and knowledge. When you break down tasks like these, you see that each requires extensive training and programming. Nonetheless, as with abstractions, you can build from basic subroutines to higher-level functionality. A robot doesn’t have to do everything at first, as described above.
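The build-from-subroutines point can be illustrated with a toy pipeline: each laundry step is a small function, and the high-level task composes them. The steps here just log what they would do rather than drive hardware, and all names are invented.

```python
# Sketch: laundry as a composition of small subroutines.
# Each step only logs what it would do; a real robot would act.

def collect(clothes, log):
    log.append(f"collected {len(clothes)} items")

def sort_out_dry_clean(clothes, log):
    keep = [c for c in clothes if not c.get("dry_clean")]
    log.append(f"set aside {len(clothes) - len(keep)} dry-clean items")
    return keep

def wash(clothes, log):
    log.append(f"washed {len(clothes)} items")

def fold_and_put_away(clothes, log):
    log.append(f"put away {len(clothes)} items")

def do_laundry(clothes):
    """Compose the basic subroutines into the high-level laundry task."""
    log = []
    collect(clothes, log)
    washable = sort_out_dry_clean(clothes, log)
    wash(washable, log)
    fold_and_put_away(washable, log)
    return log
```

A V1 robot might ship with only `collect` working; later software updates would fill in the remaining subroutines without changing the overall structure, which is the upgradability argument made earlier.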

Messes

A V2 feature that might be very handy is the ability to clean up messes. Occasionally we break glass, spill food or drop things resulting in “a mess.” I suspect eventually the robot could detect messes and clean them up.

Messes are different because they imply a complex combination of unknown objects. By definition, the breakage or mess is not composed of things the robot would recognize, so it would have to be trained to recognize a mess and to know how to pick up and clean different kinds of messes.

Surprisingly, this seems quite complex.

Self-Inspection

I strongly believe we want the robot to know its current state. It should be able to detect impending or outright failures and possibly repair itself. Maintenance of a complex object like this may be beyond many homeowners. A modular design may simplify this at first, but eventually I expect it would be self-repairing.

Summary

There is a lot more I could go into. Suffice it to say there seems to be a lot that current technology with a little extrapolation could do in this area.

I think such a robot wouldn’t be cheap. Even the simplest version would likely be several thousand dollars. Possibly this would eventually be tens of thousands of dollars.

I think this could be justified. When you consider the utility of such a robot it is quite valuable. We spend a lot of money on maids, cleanup and a lot of time. We spend a huge amount on food.

If such a device worked, I don’t think it is the equivalent of a $500 vacuum robot or lawnmower. We are talking vastly more utility, and also the capability to do more over time. I am guessing people would be willing to spend quite a bit on such a robot even for limited functionality at first.

When you consider the possible tasks that are within reach of our technology it seems obvious this will happen. I predict in 5 years we will have numerous high quality robots around the home including the pick-up robot described above.
