Category Archives: General

Maximize Oilfield Productivity with Data-Driven Optimization

The World is Non-Stationary. Here’s How to Deal with It.

The World is Non-Stationary

The world, well, just about everything in the universe, is non-stationary.  Non-stationary means it is constantly changing: what is true in one moment in time isn’t quite the same a bit later.  This is a problem. Or rather, an opportunity.

This is an opportunity especially in data analytics.  You spend a lot of time getting data together, synchronized, cleaned, and converted into something useful; then you build a model and put that model to use.  Then non-stationarity starts eating away at your model like the Langoliers (if you’ve seen the movie). Bit by bit, the system you modeled changes.  It happens everywhere:  Demographics shift. An industrial process corrodes or fouls.  Resources deplete. Instruments drift and fall out of calibration. Consumer trends change.  Data interrelationships change… you get the idea.

This happens everywhere, all the time, with rare exception.  You MUST deal with it; otherwise the performance of your solution degrades, becoming less and less useful over time until it is useless.

How to Deal With It

Stage I: Adapt Your Models
Monitor the performance of your solution and calculate its “bias”: how far off it is, on average, over a walking window of time.  Use that bias to adjust your model’s output up or down to keep it aligned.  This will work fairly well for quite a while, until the inter-relationships, the functional relations between the inputs and output(s), change enough to be notable. At that point your models are not just “off” but going wrong.
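
To make Stage I concrete, here is a minimal sketch in Python of walking-window bias correction. The window length, and the model object with its predict method, are illustrative assumptions, not any particular product’s API:

```python
# Minimal sketch: correct a model's output by its average recent error.
from collections import deque

class BiasCorrectedModel:
    def __init__(self, model, window=200):
        self.model = model                  # assumed: any object with .predict(x)
        self.errors = deque(maxlen=window)  # walking window of recent errors

    def predict(self, x):
        raw = self.model.predict(x)
        bias = sum(self.errors) / len(self.errors) if self.errors else 0.0
        return raw - bias                   # shift output to stay aligned

    def record_actual(self, x, actual):
        # Once the true value arrives, log how far off the raw model was.
        self.errors.append(self.model.predict(x) - actual)
```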

Stage II: Rebuild Your Models
After adaptation has run its course, when the inter-relationships in the data have notably changed, rebuild your models with more recent information, dropping older data if necessary.  Since rebuilding a model is itself an optimization process, swapping in the new model can cause a discontinuity in your results.
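
And a sketch of Stage II, using a scikit-learn linear model purely as a stand-in for whatever modeling technology you use; the window size is an arbitrary illustration:

```python
# Sketch: periodically refit a fresh model on only the most recent data.
import numpy as np
from sklearn.linear_model import LinearRegression

def rebuild(history_x, history_y, window_size=1000):
    """Refit on the newest window_size samples, dropping older data."""
    x = np.asarray(history_x[-window_size:])
    y = np.asarray(history_y[-window_size:])
    # Note: swapping the new model in can cause a step change in outputs.
    return LinearRegression().fit(x, y)
```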

Stage III: Run an Ensemble
One way around rebuilding discontinuities is to run an ensemble of models, dropping older, lower-performing ones while adding newer top performers, thus getting a smoother blend of results as the ensemble gradually turns over.
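
Here is a sketch of that ensemble mechanism. The recent_error attribute and the pool size are hypothetical; the point is the blend-and-retire mechanics:

```python
# Sketch: blend a pool of models by recent performance; retire the worst.

def ensemble_predict(models, x):
    """Weight each member's prediction by its inverse recent error."""
    weights = [1.0 / (m.recent_error + 1e-9) for m in models]
    total = sum(weights)
    return sum(w * m.predict(x) for w, m in zip(weights, models)) / total

def refresh(models, new_model, max_members=5):
    """Add the newest model; drop the worst performer if the pool is full."""
    models.append(new_model)
    if len(models) > max_members:
        models.remove(max(models, key=lambda m: m.recent_error))
    return models
```

Because members enter and leave one at a time, the blended output drifts smoothly instead of jumping at each rebuild.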

Do It Autonomously!

If you need a human to maintain your models, re-calibrating them by hand, it’s not going to get done, or there will be resistance… who likes extra work?!  People might even avoid performance testing because then they might have to recalibrate models.  In that case, things go bad fast.  Instead, have the system do model maintenance automatically.  We do this in our Intellect server.  If your technology does not support it, it’s time for some new technology!  Set it up, let it run, and just check in on it every now and then.

Wrapping Up…

Just about everything in the universe is non-stationary, constantly shifting through time. This is a problem. Or rather, an opportunity to be a top performer, using autonomous self-maintenance.

What is YOUR experience and how did you deal with this analytical challenge?  Comment below.

If you liked this article, please share it.

Thanks!

Carl
President / CTO
BioComp Systems, Inc. / IntelliDynamics
Call me at 1-281-760-4007

How To Go from 40% to 100% Yield

A customer called one day. Well, they weren’t a customer just yet.  They manufactured high-tech weapons systems and “weather satellites” (wink wink).  They were suffering miserably: they had their best team on a product, and no matter what they did they could only get 40% yield.  That means for every 100 units made, 60 were bloopers.  No good.  Failures.  They had heard we help people in such situations, so I agreed to try and boarded a plane to LA.

The product had multiple subassemblies from various vendors and once assembled had to be “dialed in” using tunable resistors. The units then were tested for about 60 performance metrics in a lab.  Most units failed the critical tests. Some failing units could be tweaked into specification using the resistors, the others had to be disassembled or scrapped.  Manufacturing slowed to a crawl.

The Mother of Invention

I noticed the faults seemed somewhat random, but there appeared to be an interaction between subassemblies.  Also, each subassembly came with vendor test results that characterized it.  I guessed that it might be possible to determine which subassemblies would work best together; the trick would be doing that BEFORE a unit was assembled.  What if I could virtually assemble a unit, algorithmically tweak the resistors, test it, and confirm the combination worked?  I could do that for a variety of subassembly combinations using a genetic algorithm, a great, very efficient combinatorial search technology.

The Solution

I got their data on each unit produced recently, good and bad, with all the resistor settings for each, plus the associated subassemblies’ vendor data.  I asked them which performance characteristic to target first; they gave me one that was important and failed frequently.  I then used our NeuroGenetic Optimizer tool to build models using subassembly characteristics and resistor settings as inputs and the product performance characteristic from the lab database as the output.  I soon discovered that such models were viable: they worked, and they estimated performance quite well on previously unseen assemblies.  I then put a good model inside a genetic algorithm that searched across combinations of subassemblies, looking not for the single best combination, or even for one good unit, but for a matching in which every planned unit used the available subassemblies such that all produced units would pass.  No cherry-picking, no “bad” subassemblies left wasted in the bottoms of bins.
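
To illustrate the mechanics (a heavily simplified toy, not the actual NeuroGenetic Optimizer internals), here is a sketch in Python: a surrogate model scores candidate pairings, and a simple single-parent evolutionary search, standing in for a full genetic algorithm, maximizes the worst-case predicted margin so that ALL planned units pass.  The two-subassembly product, the features lists and the spec_limit attribute are assumptions for illustration:

```python
# Toy sketch: match subassemblies so every planned unit is predicted to pass.
import random

def predicted_margin(model, sub_a, sub_b):
    """How far above the spec limit the surrogate predicts this pairing lands."""
    return model.predict(sub_a.features + sub_b.features) - model.spec_limit

def fitness(model, pairing, pool_a, pool_b):
    """Worst-case margin across all units: maximizing it pushes ALL units to pass."""
    return min(predicted_margin(model, pool_a[i], pool_b[j]) for i, j in pairing)

def evolve(model, pool_a, pool_b, generations=500):
    """Single-parent evolutionary search over pick-lists (a simplified GA)."""
    n = len(pool_a)
    partners = list(range(n))
    random.shuffle(partners)
    best = list(zip(range(n), partners))
    for _ in range(generations):
        child = list(best)
        i, j = random.sample(range(n), 2)  # mutate: swap two B-side partners
        child[i], child[j] = (child[i][0], child[j][1]), (child[j][0], child[i][1])
        if fitness(model, child, pool_a, pool_b) > fitness(model, best, pool_a, pool_b):
            best = child
    return best  # pick-list of (subassembly A index, subassembly B index) pairs
```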

We fired up the solution, which spit out a pick-list telling them which subassemblies to match up by serial number.  They built a run of units.  Guess what: they all passed that first important performance characteristic.  Yields stepped up to 55% immediately.  We then looked at the next most frequently failing important performance characteristic, built models, and put THAT model into the GA too.  All units now passed two performance characteristics.  Repeat.  60% pass rate.  We repeated and repeated and repeated until, in the end, we had 57 product characteristics in the system and they had 100% yield.  ALL UNITS PASSED!

Customer Gets Improvement Award

A while later the customer called me again.  “Ummm… is there a problem?” I asked. “No,” they said. “We just wanted to tell you we received an award from the US Army for the best manufacturing performance improvement, like, ever.  Thank you so much!”

I cannot tell you how satisfying that is, to get such calls from our customers.  We love it.

P.S. This project was the birth of our “Intellect” line of server software and desktop tools to bring the solution, and many others, to the industrial world globally.

Thanks!

Carl
President / CTO
BioComp Systems, Inc. / IntelliDynamics
Call me at 1-281-760-4007

The Hidden Analytics Ingredient: TRUST

You may have gotten the data right, cleaned it to perfection, built wonderful predictive models, and provided the results to the user.  It all looks great, but if the user does not TRUST the result, it won’t be used, and all your work was for nothing.


How To Gain the Users’ Trust

  1. Use Good Quality, Fully Representative, Data
    Quality here means not just removing outliers but making the data fully representative of every possible condition going forward.  If the data behind your analysis for some reason excludes possible future conditions, and those conditions happen, your result may look erroneous.  The user, seeing this error, will no longer trust your solution.
  2. Use Explainable Technologies
    Users don’t like black boxes.  Use a modeling, prediction and optimization technology that can be explained to non-analytics savvy people.  If you must use a “black box” technology, such as neural networks, have a scheme to demonstrate it works and has captured the relationships in the data correctly.  If you can’t explain how the result was arrived at, or prove it correct, they won’t trust it.
  3. Intuitive Models
    The models you create must be “intuitive”.  That means if X goes down and the user therefore expects Y to go up, your model had better do that, or you had better be in a position to prove why it should not.  This can be a discovery for the user, that their rule-of-thumb is invalid or conditional, but some real convincing has to be done to change their mental model of reality.
  4. Self-Maintaining Solutions
    The system may have been good, and trustworthy, at first launch, but if performance degrades over time because the modeled process is changing (“non-stationary”), your model had better keep on top of it, adapting as the world changes.  If the user looks at the results weeks or months later and sees that they are a bit “off” (building an error bias), they will stop trusting it and eventually stop using it.  Use self-maintaining technologies, because if a human has to maintain it, they won’t.
  5. Operate Reliably
    If the user goes to look at your results, they had better be there and still be correct.  Make sure you have data exception handling in place, handling spurious values for example, so that you issue a good result, or no result along with the reason why (a minimal sketch follows this list).  Make sure your solution’s technical performance and reliability suit the needs of the application and the users.
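
As a minimal illustration of that exception handling, here is a sketch of a guarded prediction assuming simple range checks; the limits and the model object are hypothetical:

```python
# Sketch: refuse to answer on spurious input, and say why.
def guarded_predict(model, reading, low=0.0, high=100.0):
    """Return (value, reason); never publish a result built on bad data."""
    if reading is None:
        return None, "no reading received"
    if not (low <= reading <= high):
        return None, f"reading {reading} outside plausible range [{low}, {high}]"
    return model.predict(reading), "ok"
```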

Trust is the essential, often forgotten, ingredient in your predictive analytics project.  The user must TRUST the result, or else they won’t act on it.

Over to You

Have you experienced issues with trust of technologies?  I’d love to hear your experiences in the comments below!

Thanks!

Carl

7 Ways to Destroy Your Analytics Project

Analytics projects, especially ones involving prediction and optimization, can bring a ton of value to the organization. Better understanding, reduced costs, increased yields, less waste, higher rates and a plethora of benefits that can give very large returns on investment.

But, that hard work can be torpedoed easily.  So, somewhat “tongue-in-cheek”, I give you 7 ways to destroy your analytics project:

The face of disaster.

Here’s how:

  1. Create Excessive Expectations
    You don’t even have to let your exuberance run like the wind.  Just promise nearly perfect results.  When your project comes to an end, even though you may have delivered very good results with real benefits, it won’t matter: you didn’t meet the expectations, and management pulls the plug.
  2. Don’t Listen to the User
    Do the project just for the fun of playing with the technology.  Skip talking to the persons who are actually going to use the results.  When you have the best whiz-bang technical solution that nobody wants, watch the system wilt like an un-watered flower.
  3. Don’t Target a Business Objective
    Don’t worry, your project will be great for the company, somehow, right?  Just do it, show the results and wonder why no funding comes forth to continue on.
  4. Have a Central Group Think They Can Do Better or Cheaper
    OK, you’ve done all the right things: you’ve targeted business objectives, you have the full buy-in of the end users, you have a great team delivering a fantastic solution, and you have documented value to the organization.  Everyone is happy.  Then the phone rings.  It’s the central analytics team at headquarters.  They want to see what you’ve done!  Great!  They want to adopt it corporately!  Not so fast.  They want to see what you did so they can do it themselves, their way.  They shut you down by getting their senior management to tell your senior management to stop and implement THEIR “corporate” solution, which fails, and no one ends up with anything.
  5. Reassign or Retire The Champion
    New innovative solutions are almost always driven by a “champion”, someone who understands the business need and the technology, a true believer that drives the project home.  During or at the end of the project, transfer them to Albania or Alaska or Angola, maybe have them retire, and then watch the project die.
  6. Don’t Plan for Exceptions
    The world is perfect.  There is no such thing as bad data, or situations you were unaware of.  Don’t worry about it.  When the first glitch comes along watch the system fail.
  7. Don’t Go After the Plum
    In some cases, prediction of something comes before being able to optimize it.  Optimization is where the real money is.  So, as a “safe” first step, do prediction, and do and do and do prediction.  Let the prediction project become the primary focus until optimization becomes a distant thought.  When the prediction part does not deliver optimization-level returns on investment, terminate the project.  (Yes, this happens!)

This was written with a bit of humor, but unfortunately these things happen, more often than we think or want.

Over to You

Do you, or did you ever have a good project go bad?  What was YOUR experience?  I’d love to hear it.

The Industrial Analytics of Things

Central Cloud or Distributed “Fog”?

There is a new technology “push” underway called the “Internet of Things” (IoT) and within the industrial context, the “Industrial Internet of Things”.  Devices and equipment are generating large volumes of data which need to be acquired, analyzed and used smartly to enhance operations.  There are two perspectives to this:

  1. Outside the Organization: Think of GE pulling data inward from customers running its turbines and locomotives, analyzing that data, predicting faults and anomalies, and performing predictive maintenance.  In such cases a central cloud with access to that external data may make sense.
  2. Inside the Organization: Data is generated inside the organization by industrial process control systems, SCADA systems, vendor materials data, warehousing, MRP/ERP, production reporting, customer feedback from the web and customer service, etc.  It needs to be archived, analyzed and used to better understand and enhance products and production.  In such cases a distributed cloud (“fog computing”) makes the most sense.

Internal Analytics: Sensitive and Secret

We, IntelliDynamics, operate in both spheres; we don’t care where the data comes from.  But the vast majority of industrial operations fall under the latter: internal data used for internal purposes.  This data is highly sensitive and tightly controlled.  It involves secret recipes, details of the process technologies used to make products, and closed-loop control of processes, where failure is a critical risk to human health, safety and environment, especially if you are using or making explosive or toxic compounds, operating oil and gas platforms, running chemical manufacturing, or other such situations.  These production environments, and their data, are tightly controlled with restricted access.  There is little desire among customers to place this data in some vendor’s “cloud” where physical control of the data is lost.


80 / 20 Rule

As a general rule, 80% of the data does not need to leave the location where it is created; it can be used locally to operate, control, understand and improve the process and the products being produced.  This suggests the best strategy is to distribute analytics, prediction and optimal control close to the source of the data, not move it all to a distant “cloud”.  The data stays well controlled, and only that which must move is moved, saving on bandwidth as well.
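
As a tiny illustration of keeping data local, here is a sketch that reduces a window of raw readings to a handful of summary statistics worth moving upstream; the choice of statistics is purely illustrative:

```python
# Sketch: summarize high-rate sensor data at the edge; ship only the summary.
import statistics

def summarize_locally(samples):
    """Reduce a raw high-frequency window to the few numbers worth moving."""
    return {
        "mean": statistics.fmean(samples),
        "stdev": statistics.pstdev(samples),
        "min": min(samples),
        "max": max(samples),
        "n": len(samples),
    }

# Ten thousand raw readings stay on the plant network;
# five summary numbers go upstream.
```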

Tight Relationship with Control

Commonly our analytics are used to create optimal setpoints for control systems, to drive production smartly to attain multiple business performance objectives.  Most customers prefer to put our systems on the “control network”, in a locked cabinet next to the Distributed Control System (DCS), not in a data center thousands of miles away.  This reduces vulnerability to network problems, latencies and keeps data under lock and key in the four walls of the plant.

Distributed Fog Computing Analytics

Placing Analytics Right Where It is Needed

While some vendors may be building large centralized clouds, we are taking the opposite approach for industrial analytics, the one our customers tell us they need and want: distributed “fog computing”, placed strategically throughout our customers’ organizations.  It links to customer service for product performance, to vendors for materials characteristics, to process control for how products are made, and to quality control for the resulting product characteristics, delivering excellent visualization, understanding, prediction and optimization right where it is needed and used.

Over to you

What do you think? Do you want your product, process and materials data in a vendor’s cloud? I’m also curious: we’re building this distributed cloud today. Would you be interested in hearing how that goes in a future blog post? 😉

I cannot wait to chat over this with you in the comments!

Thanks,

Carl