Laurie Cavanaugh, VP of Business Development at E Tech Group, and Nick Hobbs, Senior Regional Sales Manager at Apera AI, recently held a webinar covering the benefits of integrating Apera AI's 4D vision system into your automation project. Our attendees asked some great follow-up questions, and we wanted to make sure we shared the A's to your Q's!
How long does it take for the AI to go through a million learning cycles?
Nick: At this time, once all required information is submitted, you'll receive your vision program back within one to two weeks.
How many different objects can the system handle at one time? What if they have a mix of different finishes?
Nick: Right now, the PC alone can handle learning up to 30 different parts. However, users who have their own internal network, which I know E Tech Group can help you set up, work with hundreds of parts, and it takes a mere 20 seconds for the system to load a part that isn't among those original 30.
As far as different finishes go, I've been working with foundries and casting plants that handle metal billets. We can pick them clean or corroded; varying finishes don't matter to us. We really don't mind picking different colors, things of that nature. However, the system isn't looking for color at that point, so you just need to tell us what we do and don't need to look for. In that sense, it won't affect us.
Laurie: There are some other things we take into consideration, too. We have our own internal industrial IT group that has helped numerous companies' IT groups set up a proper OT network. Between the hardware, software, servers, and systems, we can help ensure those needs are met. As Nick pointed out, it's a matter of calling up the right pipelines, thinking about the operation and process, and then deciding: is it a decentralized architecture, or do we centralize? It's not a limitation of the Apera AI system; it's just a matter of the delivery mechanism to its engine and how it's distributed.
How long does it take to evaluate a new project?
Nick: From the Apera side, that takes mere hours. You just need to let us know what your application is. We like to set up a meeting with you, typically virtual, because there are a few questions we need to ask, such as cycle time and accuracy requirements. What does it look like when we're picking? What does it look like when we're placing? Things of that nature. If you can give us an hour of your time, we'd love to run through that and get you in touch with E Tech Group to finish off your project.
Laurie: I think that ties in well with getting the learnings and the pipeline built. We want to look holistically at the problem we're trying to solve and what could potentially interfere with it at the point in time where we're relying on vision. In one scenario, a client had an upstream application where grease or oils might be applied. If there's potential to create a downstream issue with resolution, or with the system being able to see, let's help them mitigate that upstream problem, perhaps with a more automated application of that grease or oil so we can control the look and feel downstream. Likewise, what happens after vision? What's the determination once we get the response back from the Apera AI system? Does it need to integrate with a PLC program, with a robot, or with other downstream applications tying everything together? That's why we like to go to site: so we can really understand the business, what's going on, and what vision is going to help them solve.
Can you share any non-robotics use cases, such as discrete product appearance or quality checks? And how many images a minute can you handle?
Nick: Our processing time from image capture through processing is less than a second, so you can get more than 60 captures a minute. We're looking at a few different applications. One is pill bottle recognition: we identify different bottle sizes and cap sizes and verify that the correct amounts and sizes are there. We're doing an application with a wheel hub assembly where we need to ensure the spacer, the nut, and the clamping mechanism are all present. The difficulty is that there's a heavy amount of grease on the product, and it's not transparent; in some instances we're only seeing a sliver of the actual piece. In other cases we're working outside on railroads, looking for rail ties and dropped railway components, and we're working on projects in solar fields and with mining equipment. Ambient lighting in those environments can impact other systems, but it will not impact the Apera AI vision system. Those are just some of the non-robotic applications.
Do you need a site visit to get started?
Nick: For Apera AI, we do not. We can do a quick Teams meeting, much like the one we're on now. We can run through your application as long as you have pictures or video of the parts and a good understanding of the application, so that we can determine whether or not our system is a fit for it.
Laurie: I think it's not necessary, but it is always interesting. I love the fact that the lighting doesn't matter, but we do still want to figure out the size of the field of view, since some variables impact criteria that are dimensional, physical, or spatial. So we want to see the environment, not because we're concerned, but to understand equipment requirements, and also to look upstream and downstream at what's happening before and after what the vision technology will be looking at. Once we've made it past the point of feasibility and begin examining the true application, installation, and all other considerations, it benefits us to be on site.
Do you support segmentation at a pixel level? Can you describe the computing platform: CPU, GPU, etc.?
Nick: There's nothing special about the hardware we use; it's all off the shelf. You can have either a Dell or a Lenovo PC, as specified. Our PC does have a very fast CPU and runs very hot, which is why we currently offer a desktop version, and we're getting ready to release a smaller one. We used to have more of a rackmount version; it sounded like a jet engine because the fans had to spin so fast and had to be so small.
Let's say I am looking for a foreign object that can exist anywhere in the image, and that foreign object is low contrast, white on white. Can your system handle that?
Nick: Yeah, absolutely. We do black on black, white on white, clear objects, chrome objects; that's what we specialize in. Our system helps make these automation projects possible, whereas with other systems you have to use lighting tricks. None of that is an issue with our system. As for finding defects, you didn't ask, but it's worth raising: currently our system does not look for defects. If it is a known object and we have its CAD file, we can look for it, and if it's not supposed to be there, we can tell you it's there. However, we can't look for scratches, dents, disfiguration, things of that nature.
Instead of a desktop PC, have you considered using an IPC that can be mounted on the machine?
Nick: The version that we're coming out with has an IPC look and feel, but it is not IPC rated, so I would suggest it go into a cabinet. It's just not something we can get onto an IPC yet because of the Nvidia GPU.
You showed a video of a tier one automotive supplier doing assembly. What other applications of this tech have automotive suppliers validated?
Nick: The reason a lot of people haven't heard about us is that we just hit the American market last year, and from there went to Automate. We met a lot of automotive suppliers at that show, and they identified a lot of issues with workforce availability. So right now, a lot of automotive companies are replacing those jobs with our system and placing those people into other roles. By using our system with a cobot or an industrial robot, you can replace a person's job at that point in time, because our system is fast enough to work at the same pace an operator would. Assembly, kitting, CNC, and machine tending are just some of the applications where replacing those jobs, and putting people where they're better suited, can get you a higher return. We also saw a very large automotive company use our system for depalletization: they had 26 different part SKUs coming in from suppliers, and we were depalletizing all 26. We have a very large consumer goods company using our system to handle hundreds of different part SKUs as well.
Laurie: I've had a long history in automotive, and if you look at the automotive supply chain more broadly, they are working to semi-automate material conveyance by adding a vision component that helps with inspection and with identifying items that may be out of spec or shouldn't move forward. This isn't limited to automotive; it applies to any discrete manufacturing as well.
Is there a specific camera type, model, or brand that has to be used?
Nick: We suggest using only the models and brands that we have tested. There isn't a specific brand that has to be used, but we do need to go through the testing process. Currently, the system comes with Basler cameras; they're 12-megapixel grayscale stereo vision cameras. We also use Allied Vision. That doesn't mean the same type of camera from another brand couldn't be used.
Is it a 1:1 PC-to-robot solution? You said up to two robots and five camera sets, and that it cannot detect scratches. Are those the only limits identified?
Nick: Two-part answer here. With one PC, you can control up to two robots, as long as your application allows it. If I'm doing a one-second cycle time with your robot, it's just that robot; it took all the computing power. But if you're doing something pretty normal, 4 or 5 seconds, you can have up to two robots and up to five camera sets total. There are a couple more limitations beyond not being able to detect scratches. We can work with moving parts, but if a robot is dealing with the part, we want it to be pretty stationary; if it's not moving with the robot, we have other ways around it. Just let us know what the application looks like, and we'll let you know whether or not it's a fit. The parts we're picking also need some form of rigidity. Plastic bags that can deform in an infinite number of ways are not good to pick; I can't train for infinite, only for finite. The same goes for strings and things of that nature. Wires and cabling that are fairly rigid, we can do. We've done systems with bags of water that were fairly full, and we've done chip bags, but again, the part has to have some form of rigidity.
How many cameras per PC?
Nick: Per PC, 10 cameras, or 5 pairs of cameras, can be used. Our 4D vision system does not allow for any individually running cameras; they always operate in stereo pairs.
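To make those sizing rules concrete, here is a minimal sketch of a pre-sales sanity check. It simply encodes the limits Nick describes: at most two robots and five stereo camera pairs per PC, cameras only in pairs, and the note that a roughly one-second cycle time consumes the whole PC's compute for a single robot. All of the names below are hypothetical illustrations, not part of any Apera AI tooling.

```python
# Hypothetical sizing check based on the per-PC limits stated in this Q&A.
# Not an Apera AI API; numbers come from Nick's answers above.
from dataclasses import dataclass

MAX_ROBOTS_PER_PC = 2
MAX_CAMERA_PAIRS_PER_PC = 5  # 10 cameras total, always in stereo pairs


@dataclass
class CellConfig:
    robots: int
    cameras: int          # total camera count; must be even (stereo pairs)
    cycle_time_s: float   # expected robot cycle time in seconds


def check_cell(cfg: CellConfig) -> list[str]:
    """Return a list of sizing problems; an empty list means the cell fits on one PC."""
    problems = []
    if cfg.robots > MAX_ROBOTS_PER_PC:
        problems.append(f"{cfg.robots} robots exceeds the {MAX_ROBOTS_PER_PC}-robot limit per PC")
    if cfg.cameras % 2 != 0:
        problems.append("cameras run only as stereo pairs; an odd camera count is not supported")
    if cfg.cameras // 2 > MAX_CAMERA_PAIRS_PER_PC:
        problems.append(f"{cfg.cameras // 2} camera pairs exceeds the {MAX_CAMERA_PAIRS_PER_PC}-pair limit per PC")
    # Per Nick: a ~1 s cycle time consumes the PC's compute for a single robot,
    # while a typical 4-5 s cycle leaves headroom for a second robot.
    if cfg.robots == 2 and cfg.cycle_time_s < 4.0:
        problems.append("two robots with a sub-4 s cycle time may exceed one PC's compute budget")
    return problems


if __name__ == "__main__":
    print(check_cell(CellConfig(robots=2, cameras=6, cycle_time_s=4.5)))  # [] -> fits on one PC
```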
Where does the functionality of the Apera AI system end and the other robotic or PLC programming begin?
Nick: This is where E Tech Group is going to be able to help you out. In the holistic operation, all we're concerned with is the point and the path: instead of you programming them, we give them to you. However, if safety takes over, all hands are off; we don't touch safety and will stop everything. We're just giving you the coordinates and the path. Everything else is still controlled by the robot or the PLC that you use today.
Brad: We started off with just the robot and camera system, and it's extremely straightforward. You get your data from the vision system, you program the robot to go to that point, and then you tell the robot where you want the object to go. From there, we added an HMI and an Allen-Bradley PLC to run just a couple of IO back and forth: when the robot is in a given position, we want a program to run in the background to count how many balls we have left, see how many times we've cycled, and see when we need to go clean out our collection bin. There hasn't been a single piece that has been hard to integrate with the Apera AI system. Like Nick said, they give you the data; you choose what you do with it.
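To illustrate the division of labor Nick and Brad describe, here is a minimal sketch assuming a hypothetical JSON-over-TCP feed of pick poses from the vision PC and a placeholder move_robot() standing in for your robot controller's own API; neither is Apera AI's actual interface, and all tag names are made up. The PLC side uses pycomm3, an open-source Allen-Bradley Logix driver, for the kind of background IO counting Brad mentions.

```python
# Hypothetical integration sketch: the vision system only supplies pick
# coordinates; sequencing, counting, and safety stay in your robot/PLC logic.
import json
import socket

from pycomm3 import LogixDriver  # open-source Allen-Bradley Logix driver

VISION_ADDR = ("192.168.1.50", 5000)   # hypothetical vision-PC endpoint
PLC_ADDR = "192.168.1.10"              # hypothetical Allen-Bradley PLC address


def get_pick_pose(sock: socket.socket) -> dict:
    """Read one JSON-encoded pose (x, y, z, rx, ry, rz) from the vision PC.

    Assumes one JSON message per packet, for simplicity of the sketch.
    """
    data = sock.recv(4096)
    return json.loads(data)


def move_robot(pose: dict) -> None:
    """Placeholder: hand the pose to your robot controller's own API."""
    print(f"moving robot to {pose}")


def run_cell() -> None:
    with socket.create_connection(VISION_ADDR) as sock, LogixDriver(PLC_ADDR) as plc:
        picks = 0
        while plc.read("Cell_Run").value:           # the PLC decides when to run
            pose = get_pick_pose(sock)               # vision hands over the point and path target
            move_robot(pose)                         # robot motion stays in your program
            picks += 1
            plc.write(("Vision_PickCount", picks))   # background counting, as Brad describes


if __name__ == "__main__":
    run_cell()
```

The structure mirrors the answers above: the vision system only hands over coordinates, while everything else, including safety, remains in the robot and PLC programs you already own.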
How is the software licensed?
Nick: The way it works right now is that the software comes with a three-year license, so it's completely bundled. If at the end of three years you choose not to renew, the system will still continue to operate as it has for the past three years. There's absolutely no need to renew unless you want added support from our side and/or access to our AI: if you want to train more objects, or add or subtract objects from the system, you will need that access.
Do you have a list of cobot manufacturers that you are compatible with?
Nick: It isn't just cobots; it's industrial robots as well. We support about 12 different brands, all of the major brands you'd consider: ABB, FANUC, KUKA, Doosan, Yaskawa, Mitsubishi, Stäubli, and more. It will be compatible with almost all of them, and if you have a project with a brand whose driver still needs to be written, it's just a couple-week process.
Our previous inspection projects always needed reference images for training. Is that the case here?
Nick: For any project, we need CAD files. If we're looking for, let's say, part presence, we need the actual CAD files of the parts and the stack. For the pill bottle application, we needed to see all the different variations of the lids and the bottle sizes and things of that nature. It's not just the CAD, though; cell phone pictures help tremendously. We like to see the finishes and such so that our AI can train.