Raise Your Confidence by Strategically Stacking Evidence
Late-stage design just hit a snag—now comes the moment that separates guesswork from great engineering. We walk through a clear, repeatable method to investigate unexpected failures and make high-impact decisions with confidence. Instead of hunting for a perfect test, we set a confidence target and stack multiple forms of imperfect evidence until we close the gap.
If you’re navigating late-stage product development and want a calm, methodical way to move from 40% to 90% confidence, this framework will help you choose the next best step, allocate limited time and budget, and know when to stop.
Join the Substack for monthly guides, templates, and Q&A where I help you apply these to your specific projects. Visit qualityduringdesign.substack.com.
Hello, welcome to the Quality During Design Podcast. I'm your host, Dianna Deeney.
We're in month two of a three-month arc. Month one was in October, where we were talking about late-stage design decisions. We've gone through our product development process, everything has been going as expected, and then we hit a glitch. An unexpected thing happened. It could be a failure in the test lab. It could be that we got results that we weren't expecting. And now we're faced with a decision. What do we do? What design decision should we make?
In the last episode, we talked about better framing the problem, identifying what's giving us heartburn about it, and assigning a confidence level to our design decision. And that led us to understand that the particular problem example from last month is a critical unknown. We need to do more testing or more investigation to better understand the problem and improve our confidence about it.
So that's where we're heading into this month. Last month was called Frame It. This month is called Investigate It. And then the next month in December is going to be Choose It. So here in phase two, we're investigating more about our problem and deciding what to do about it and maybe when to stop.
Why Engineers Need a Systematic Approach
Just a note about these methods. It might seem like, "I don't need a system to help me think through that. I'm an engineer, I have training, I'm good to go."
But what happens is when you're in a project and you're facing deadlines, you're facing scrutiny, it's a decision that is an important one. It could make or break the product, literally. We don't always operate with cool heads. And sometimes there is so much information and so many decision points and conversations going on about it that we get confused, we get a little bit lost. So that's why I turn to these systematic approaches. Okay, let's stop, let's take a step back, let's frame our problem.
So now that we've done that and we've decided we need to do more investigating, we don't want to just throw spaghetti at the wall and hope something sticks by doing everything we can. We can still be strategic about it, even when we're under time and cost limitations, especially when we're under time and cost limitations.
Introducing The Stacking Principle
Here's the basic idea and approach for this. You want to stack your evidence. We can call it The Stacking Principle, because great engineers don't hunt for the perfect test that answers every question. They strategically stack imperfect evidence until the confidence exceeds their decision threshold.
So, what do I mean by that?
We're heading into this problem with 40% confidence in our decision, which is low. And it's a high impact problem. So we want our confidence level to be pretty high. We want it really to be 80-90%. So, how do we fill that gap between where we are now, where we're getting heartburn, we're not feeling good about it, we were able to assign a confidence because we defined the problem, we framed it well. Now we have this gap of about 50% confidence that we need to fill.
We don't want to just start doing any old tests. We can be strategic about what it is that we do and when. So the key with evidence stacking is to stack up different methods of tests. It could be literacy searches, it could be expert consultation, analysis, component testing, system level testing, and well, field data if you have it. Sometimes we do. All of these are sources of information that we can use to boost our confidence.
Evidence Stacking and The Confidence Progression
The key with evidence stacking is we want to evaluate our boost in confidence after each iteration.
So let's say that we had an opportunity to talk with an expert. It didn't take a lot of time, the cost was low to moderate, and that was able to boost our confidence in our decision and our understanding by about 10%. And then we decide, well, okay, let's do some analysis, computer-aided analysis. We do some reliability life analysis, and that boosts us another 15%.
So now that we've talked with an expert and we've gone through some reliability analysis, that has boosted our confidence by 25%. So that's great. We haven't even tested anything yet.
But let's say we do want to test and we run an accelerated life test. And what we thought or expected was going to be the failure mode is not. Something else failed first. So we have more information, and now we may need to pivot. We can't just ignore this new failure mode. This is something that we have to analyze. So our confidence in our decision has gone down a little bit. Maybe it's reduced by 5%, depending on what it is.
As we're marching toward figuring out this problem and being able to make a decision, these hiccups are going to happen. They happen all the time. We think we're going one way and then we learn something new and we have to pivot. That's what makes product design so fun and challenging and frustrating, but yet also rewarding.
The Critical Decision: When to Stop
So here's a rule of thumb. You must decide the confidence level you need before you start based on the decision's impact. I mentioned before that if it's a high impact decision, you'll probably want your confidence to be 80 to 90%.
The other thing is, if you do reliability-type tests, like accelerated life testing, and reliability analysis, like Weibull analysis, you can build confidence into your test setup. You'll be able to adjust the level of stress and the number of parts that you're testing to match the confidence that you're aiming for in the results.
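One standard way to build confidence into a test setup is the success-run (zero-failure) relationship from the binomial distribution: to demonstrate reliability R at confidence C with no failures allowed, you need n = ln(1 - C) / ln(R) samples. A minimal sketch; the target values below are illustrative assumptions, not from the episode.

```python
import math

def success_run_sample_size(reliability, confidence):
    """Zero-failure (success-run) sample size: C = 1 - R**n, solved for n."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Demonstrating 90% reliability at 90% confidence (the classic "90/90" case)
n = success_run_sample_size(reliability=0.90, confidence=0.90)
print(n)  # 22 parts, assuming every part passes the test with no failures
```

Turning the sample size up or down, or raising the stress level, is exactly the kind of adjustment that lets the test match the confidence you're aiming for.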
So when do you stop?
You stop investigating when you either hit the target or when the next test costs more than the value of the information it would give you. We'll talk more about that next month in the Choose It episodes.
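The stopping rule comes down to two comparisons: has confidence reached the target, and does the next test cost more than the information it would buy? A rough sketch of that logic, with made-up cost and value numbers (estimating the value of information is next month's topic):

```python
# Hypothetical stopping-rule check; the dollar figures are invented examples.

def should_stop(confidence, target, next_test_cost, info_value):
    """Stop when the target is met or the next test isn't worth its cost."""
    if confidence >= target:
        return True, "confidence target reached"
    if next_test_cost > info_value:
        return True, "next test costs more than the information is worth"
    return False, "keep investigating"

# Example: 60% confidence toward a 90% target; the next test costs $8k,
# but we estimate its information is worth $12k, so we keep going.
stop, reason = should_stop(confidence=60, target=90,
                           next_test_cost=8_000, info_value=12_000)
print(stop, reason)  # False keep investigating
```

Flip either condition, say the test now costs $15k against the same $12k of value, and the rule tells you to stop.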
Insight to Action and Call to Action
So, what's today's insight to action?
You're probably not going to find the perfect test to solve your problem and give you the answer. But you can stack different types of evidence. And that evidence can vary in time and cost and the level of confidence boost that it gives you. But adding the confidence boost and re-evaluating where you are after each step is a useful way to determine if you're on the right track, if you need to pivot, how far away you are, and to what extent you need to gather evidence. And it all goes back to the baseline of when we framed the problem.
So remember, the goal isn't the perfect test, it's strategic confidence stacking. Start treating confidence as a measurable metric that moves up and sometimes down based on evidence. That's The Stacking Principle.
And now you have the framework, but frameworks are only useful when you can execute them consistently. So if you're serious about approaching your problems differently in late-stage engineering product development, and you want to get more into defining confidence thresholds for your project, then you'll want to join us on Substack.
As a subscriber, you'll get the full posts, which are deeper dives into these topics. You'll have opportunities to ask questions particular to your own project, and you'll get a swipe file with the basics. So when you hit these emergencies in your projects, you can pull out the swipe file, get reminded about some of these techniques, and then apply them to your project.
Just visit us on Substack at Quality during Design. And as always, this episode and show notes will be at DeeneyEnterprises.com. And this has been a production of Deeney Enterprises. Thanks for listening!
Other Quality during Design podcast episodes you might like:
Design Input & Specs vs. Test & Measure Capability