Responsible M2M

About a year ago, maybe two years now, we had a large manufacturing customer that we were working with to implement MTConnect on their production floor. Basically they had 20 five-axis machine tools running, making aircraft parts, and they wanted to be able to get data off of those machines and “put it in The Cloud.” Well, first off, I’ve talked about how much I dislike the term “The Cloud,” so we had to clarify that. Turns out they meant “in a SQL Server database on a local server.”

MTConnect is a machine tool (hence the “MT” part) standard that we leveraged and heavily extended for use in our Solution Family products. Painting it with a broad brush, what it means is that all data from each machine tool – axis positions, part program information, door switches, coolant temperature, run hours, basically the kitchen sink – can be made available through a REST service running either on the machine tool or on a device connected to it.
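To make that a little more concrete, here’s a rough sketch (Python, purely for illustration) of what polling an agent looks like. An agent answers plain HTTP requests, and a “current” request returns an XML snapshot of every data item it knows about. The agent address and the handful of tags I pull out are invented for the example; a real agent describes its devices and data items at /probe.

```python
# Rough sketch of polling an MTConnect agent's "current" endpoint.
# The agent address is hypothetical; /probe describes what a real agent offers.
import requests
import xml.etree.ElementTree as ET

AGENT_URL = "http://machine-01.example.local:5000"  # hypothetical agent address


def read_current(agent_url=AGENT_URL):
    """Fetch the agent's current snapshot and pull out a few data items."""
    response = requests.get(f"{agent_url}/current", timeout=5)
    response.raise_for_status()
    root = ET.fromstring(response.content)

    # MTConnect documents are namespaced; matching on the tag suffix keeps
    # the sketch independent of the schema version the agent reports.
    samples = {}
    for element in root.iter():
        tag = element.tag.split("}")[-1]
        if tag in ("Position", "Temperature", "Execution", "Program"):
            key = element.get("name") or element.get("dataItemId")
            samples[key] = element.text
    return samples


if __name__ == "__main__":
    for name, value in read_current().items():
        print(name, value)
```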

They wanted to take that data and put it into SQL Server so their engineering group could run analytics on it. Maybe they wanted to look at part times, energy consumption, tool path length, whatever. They actually weren’t fully sure what they wanted to do with the data; they just knew that “in The Cloud” was where everyone said it should be, so the commandment came down that that’s where the data would go.

Ugh. The conversation went something like this.

“So you want all of the data from each machine tool to go to the server?”

“Yes. Absolutely.”

“You know that there are 6 continually moving axes on those machines, right? And a constantly changing part program.”

“Of course. That’s the data we want.”

“You are aware that that’s *a lot* of data, right?”

“Yes. We want it.”

“You’re sure about this?”

“Yes, we’re sure. Send the data to The Cloud.”

So we set up a mesh of Solution Engines to publish *all* of the data from *all* of the machines to their local server. We turned on the shop floor. And roughly 20 seconds later the network crashed. This was a large, well built, very fast, hard-wired network. There was a lot of available bandwidth. But we were generating more than a lot of data, and the thing puked, and puked fast.

So what’s the lesson here? That you can always generate more data out at the edge of your system than the infrastructure is capable of carrying. If you’re implementing the system for yourself, trying to transfer all of the data is a problem, but if you’re implementing it for a customer, trying to transfer all of it is irresponsible. We did it in a closed system that was just for test, knowing what the result would be and that it would be non-critical (they simply turned off data broadcasting and everything went back to normal), but we had to show the customer the problem. They simply wouldn’t be told.

We need to do this thing, this M2M, IoT, Intelligent Device Systems or whatever you want to call it responsibly. Responsible M2M means understanding the system. It means using Edge Analytics, or rules running out at the data collection nodes, to do data collection, aggregation and filtering. You cannot push all of the data into remote storage, no matter how badly you or your customer might think it’s what needs to happen.
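What does “aggregation and filtering” at the edge actually look like? Here’s a minimal sketch: instead of forwarding every raw axis position, roll a window of samples up into one summary record per interval and publish only that. The one-minute window and the summary fields are arbitrary choices for the example; pick whatever granularity the analytics actually need.

```python
# Minimal edge-side aggregation sketch: buffer raw readings locally and emit
# one summary record per window instead of every sample. The window length
# and summary fields are illustrative, not part of any standard.
import statistics
import time


class WindowAggregator:
    def __init__(self, window_seconds=60):
        self.window_seconds = window_seconds
        self.samples = []
        self.window_start = time.monotonic()

    def add(self, value):
        """Buffer a raw reading; return a summary dict when the window closes."""
        self.samples.append(value)
        if time.monotonic() - self.window_start < self.window_seconds:
            return None
        summary = {
            "count": len(self.samples),
            "min": min(self.samples),
            "max": max(self.samples),
            "mean": statistics.fmean(self.samples),
        }
        self.samples = []
        self.window_start = time.monotonic()
        return summary
```

One record per signal per minute is something a network and a database can live with; thousands of raw samples per second is not.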

But that’s fine. Most of the time you don’t need all of the data anyway, and if, somehow, you do, there are still ways you can have your cake and eat it too.

Let’s look at a real-world example. Let’s say we have a fleet of municipal busses. These busses drive around all day long on fixed routes, picking up and dropping off people. These busses are nodes that can collect a lot of data. They have engine controller data on CAN or J1708. They have on-board peripherals like fare boxes, head signs and passenger counters. They have constantly changing positional data coming from GPS and/or dead-reckoning systems. They’re also moving, so they can’t be wired into a network.

Well we could send all of that data to “The Cloud”, or at least try it, but not only would it likely cause network problems, think of the cost. Yes, if you’re AT&T, Verizon or one of the mobile carriers, you’ve just hit pay dirt, but if you’re the municipality the cost would be astronomical. Hello $20 bus fares.

What’s the solution here? Well, first of all there’s a load of data that we have that’s near useless. The engine temperature, RPMs or oil pressure (or any of the thousands of other data points available from the engine controller) might fluctuate, but generally we don’t care about that data. We care about it only when it’s outside of a “normal” range. So we need Edge Analytics to be able to watch the local data, measure it, and react when some conditions are met. This means we can’t just use a “dumb” device that grabs data from the controller and forwards it on. Instead we need an Intelligent Device – maybe an Intelligent Gateway (a device with a modem) – that is capable of running logic.
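A rule like that doesn’t have to be elaborate. Here’s a minimal sketch, assuming the device can read the parameter locally; the reader callable stands in for whatever CAN or J1708 access the gateway really has, and the thresholds are made up. Nothing leaves the vehicle while the value stays in its normal band.

```python
# Sketch of a single edge rule: read a local value, react only when it is
# outside its "normal" range. The range and both callables are illustrative.
NORMAL_COOLANT_RANGE = (70.0, 105.0)  # degrees C, invented thresholds


def check_coolant(read_coolant_temp, on_alarm):
    """Evaluate the rule once; call on_alarm only when the value is out of range."""
    temp = read_coolant_temp()
    low, high = NORMAL_COOLANT_RANGE
    if temp < low or temp > high:
        on_alarm({"signal": "coolant_temp", "value": temp, "range": NORMAL_COOLANT_RANGE})
```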

Now when we’re out of the “normal” range, what do we do? Maybe we want to just store that data locally on the vehicle in a database and download it at the end of the shift when the vehicle returns to the barn. Maybe we want to send just a notification back to the maintenance team to let them know there’s a problem. Maybe we want to immediately send a capture of a specific set of data off to some enterprise storage system for further analysis so the maintenance team can order a repair part or send out a replacement vehicle. It depends on the scenario, and that scenario may need to change dynamically based on conditions or the maintenance team’s desires.
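Since the right reaction depends on the scenario and may need to change on the fly, it helps to treat the reaction as configuration rather than code. This is one way that could look; the action table, the database path and the notify/capture stubs are all invented for the sketch.

```python
# Sketch of dispatching an alarm to a configurable action: keep it on the
# vehicle, ping the maintenance team, or push a focused capture upstream.
# Everything named here (table, path, transports) is illustrative.
import json
import sqlite3

ACTIONS = {"coolant_temp": "store"}  # could also be "notify" or "capture"


def send_notification(alarm):
    print("notify maintenance:", alarm)          # stand-in for SMS/email/webhook


def upload_capture(alarm):
    print("upload focused capture for:", alarm)  # stand-in for an upstream push


def handle_alarm(alarm, db_path="telemetry.db"):
    action = ACTIONS.get(alarm["signal"], "store")
    if action == "store":
        # Keep it local; pull it off the vehicle at the end of the shift.
        with sqlite3.connect(db_path) as db:
            db.execute("CREATE TABLE IF NOT EXISTS alarms (payload TEXT)")
            db.execute("INSERT INTO alarms VALUES (?)", (json.dumps(alarm),))
    elif action == "notify":
        send_notification(alarm)
    elif action == "capture":
        upload_capture(alarm)
```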

Positional data is also ever-changing, but do we need *all* of it? Maybe we can send it periodically and it can provide enough information to meet the data consumer’s needs. Maybe once a minute to update a web service allowing passengers to see where the bus is and how long it will be until it arrives at a particular spot. Or the device could match positional data against a known path and only send data when it’s off-route.
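Here’s a sketch of that kind of thinning, assuming a GPS fix arrives every second or so. The waypoints and the distance threshold are invented; a real implementation would match against real route geometry, but the shape of the logic is the same: publish on a slow clock, or right away when the bus is off its route.

```python
# Sketch of thinning positional data: publish at most once a minute, or
# immediately when the fix is far from every known route waypoint.
import math
import time

ROUTE = [(44.98, -93.27), (44.99, -93.26), (45.00, -93.25)]  # invented waypoints
OFF_ROUTE_DEGREES = 0.005  # crude lat/lon threshold, good enough for the sketch


def off_route(fix):
    return all(math.hypot(fix[0] - lat, fix[1] - lon) > OFF_ROUTE_DEGREES
               for lat, lon in ROUTE)


class PositionPublisher:
    def __init__(self, publish, min_interval=60):
        self.publish = publish          # e.g. a POST to the "where's my bus" service
        self.min_interval = min_interval
        self.last_sent = float("-inf")

    def on_fix(self, fix):
        now = time.monotonic()
        if off_route(fix) or now - self.last_sent >= self.min_interval:
            self.publish(fix)
            self.last_sent = now
```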

And remember, you’re in a moving vehicle with a network that may or may not be available at any given time. So the device has to be able to handle transient connectivity.
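In practice that usually means store-and-forward: queue outbound messages in local storage and drain the queue whenever a send actually succeeds. A minimal sketch follows; the transport is left as a callable because whether it’s cellular on the road or Wi-Fi back at the barn doesn’t change the pattern.

```python
# Store-and-forward sketch for a connection that comes and goes: every message
# lands in a local SQLite outbox first, and the outbox is drained opportunistically.
import json
import sqlite3


class StoreAndForward:
    def __init__(self, send, db_path="outbox.db"):
        self.send = send  # transport callable; assumed to raise on failure
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")
        self.db.commit()

    def enqueue(self, message):
        self.db.execute("INSERT INTO outbox (payload) VALUES (?)",
                        (json.dumps(message),))
        self.db.commit()
        self.flush()

    def flush(self):
        rows = self.db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            try:
                self.send(json.loads(payload))
            except Exception:
                return  # network is gone again; leave the rest queued for later
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            self.db.commit()
```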

The device also needs to be able to effect change itself. For a vehicle maybe it puts the system into “limp mode” to allow the vehicle to get back to the barn and not be towed. For a building maybe it needs to be able to turn on a boiler.

The point here is that when you’re developing your Intelligent Systems you have to do it with thought. I’d say that it’s rare that you can get away with a simple data-forwarding device. You need a device that can:

– Run local Edge Analytics
– Store data locally
– Filter and aggregate data
– Run rules based on the data
– Function with a transient or missing network
– Effect change locally

Intelligent Systems are great, but they still need to be cost-effective and stable. They also should be extensible and maintainable. You owe it to yourself and your customer to do M2M responsibly.

Of course if you want help building a robust Intelligent System, we have both products and services to help you get there and would be happy to help. Just contact us.

