No matter where you are on your MBSE journey, you’ll want to know if it’s worth the trip.
How do you justify your MBSE investment?
You’re going to need to justify the investment in tools, time, and resources associated with your MBSE journey. In our prior blog, we recommended starting that journey with requirements. Let’s use them as the basis for a justification example…
Accountants are in the business of cost savings; systems engineers are in the business of cost avoidance. There’s a difference. Accountants want to know how much money you’re spending today that you won’t need to spend in the future (like the number of pencils or engineers you’re currently paying for that this investment will make unnecessary). Systems engineers are thinking about how to reduce the risk of something bad happening that will cost money in the future. Accountants are looking backward; systems engineers are looking forward.
As a young engineer on one program, I was designated as the person to deliver the bad news of a missed requirement that was going to necessitate an expensive redesign. After delivering the bad news at a program review, the program manager went off and, after the swearing was done, said, “… blankety-blank…Sampson, this is like driving a car and looking through the rear-view mirror; I can see that problem after I’ve run over it. Why don’t you give me something where I can see the problem coming so I can change my direction before I run into it?” This is a great way to describe why we’re doing integrated MBSE.
Justifying the right thing…
The first step in any justification is knowing where you are: how many things you are making, how many defects you have, etc. When implementing integrated requirements, how many of you know how many requirements your engineers are handling today? If you don’t know that answer, you’re not alone. If you do have it, please share it with the rest of us so we have a good benchmark.
Looking for benchmarks across domains: in software, the common claim is 1-2 requirements/features per software engineer. I remember one Aero RFP response using a figure of 75 requirements per engineer. When asked, ChatGPT claims 50-100 requirements per engineer as a benchmark. Taking ChatGPT at its word and multiplying that out for, say, 10k requirements on an airplane (not an uncommon count for an aerospace project, which includes 2,000+ regulatory requirements from the FAA alone), you will need ~100 engineers to do that job. Relying on the classic cost-savings ROI, you would justify the investment by saying something like, “This new requirements tool enables an engineer to handle 200 requirements, allowing us to cut the number of requirements engineers in half (i.e., 50 instead of 100).” If this is true, we can keep up with more requirements per engineer. But from prior blogs on this topic, you know we are improving the wrong thing: we don’t just need to manage requirements. Remember, the value of requirements isn’t in managing them; it’s in where they go, showing up in your face so you can constrain your design decisions to meet them…
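The back-of-the-envelope headcount math above can be sketched in a few lines. All of the figures here are the illustrative numbers from the text (the ~100-requirements-per-engineer benchmark and a hypothetical tool that doubles it), not validated industry benchmarks:

```python
# Illustrative headcount arithmetic from the text (not real benchmark data).
total_requirements = 10_000           # typical large aerospace program (per the text)
reqs_per_engineer_today = 100         # upper end of the claimed 50-100 benchmark
reqs_per_engineer_with_tool = 200     # hypothetical improvement from a new tool

engineers_today = total_requirements / reqs_per_engineer_today
engineers_with_tool = total_requirements / reqs_per_engineer_with_tool

print(f"Engineers needed today: {engineers_today:.0f}")        # 100
print(f"Engineers needed with tool: {engineers_with_tool:.0f}")  # 50
```

Which is exactly the classic cost-savings pitch: half the requirements engineers for the same requirement count.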
I imagine Samsung Galaxy Note 7 developers were managing requirements but failed to carry the battery safety requirements throughout the process: a battery casing was too small for the battery, putting pressure on the electrodes and leading to overheating. The result was the recall and discontinuation of the Galaxy Note 7, including the cost of replacements, logistics, disposal, etc., and untold reputation damage, with airlines banning passengers from carrying the device on flights due to safety concerns. Imagine the ROI of carrying a requirement into the design and manufacturing processes to avoid this problem.
The recently canceled Mitsubishi SpaceJet regional jet program is another case in point:
This from The Guardian, Feb. 8, 2023: after years of delays, Mitsubishi Heavy Industries admitted that building Japan’s first homegrown passenger jet was too difficult and probably not viable. “Some test flights were aborted because of air conditioning defects and other software problems, and the delays meant revisions to the original design were required”.
…they wrote off $7.6 billion on the failed program, which was very real money that couldn’t be spent on something else.
The Boeing 777 had its share of cost overruns, discovering integration problems late in development: some 16,000+ integration problems at first flight (see graph below, shared by Boeing in a User Group presentation back in Oct. 2006). Boeing stated it took 18 months to work the problems out, delaying committed deliveries. Once the integration problems were resolved and production ramp-up started, Boeing delivered ~34 planes in 18 months at a cost of ~$150-$200 million each; that means the late-discovered integration problems delayed $5.1-$6.8 billion in revenue (which is also real money).
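The delayed-revenue estimate works out as below. The aircraft count and per-unit price range are the figures quoted in the text, so treat this as illustrative arithmetic rather than an audited number:

```python
# Delayed-revenue estimate for the 777 example (figures from the text).
planes_delivered = 34                  # aircraft delivered in the 18-month ramp-up
price_low, price_high = 150e6, 200e6   # ~$150-200M list price per aircraft

delayed_low = planes_delivered * price_low    # $5.1B
delayed_high = planes_delivered * price_high  # $6.8B

print(f"Delayed revenue: ${delayed_low / 1e9:.1f}B - ${delayed_high / 1e9:.1f}B")
```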
Realizing they couldn’t afford to do it again on the 787, they launched an integrated interface management system using Siemens solutions that enabled a complete mapping of all functional interactions across the entire aircraft (~1.2 million interactions). If something changed in one functional sub-system, all other related sub-systems were informed and required an analysis and justification of if/how the proposed change affected the local sub-system. If there was an effect requiring a change, downstream sub-systems were informed, and so on, resulting in a completely managed ripple effect across the aircraft. Using this system, the 787 was continuously integrated rather than discovering integration issues late; the result was zero (0) squawks on the first flight (see graph below). How much was that worth?
ROI doesn’t matter anymore…
These ROI arguments and case studies are interesting, but they don’t really matter. Today’s systems are so complex that you can no longer afford to develop them with your classic “spec-design-build” practices. To survive and thrive, you must switch to continuous “integrate-THEN-build” practices (like the Boeing 787). Boeing was able to start the 787 ramp-up with no delayed-revenue impact from late-cycle functional integration issues.
So, we are talking about organizational survival. ROI makes sense for incremental improvement, not fundamental process change, which means the ROI numbers don’t matter. You are not doing business as usual because today’s products are no longer usual. You can’t do the job like you used to, and putting more people on the job will not solve the problem; product complexity is well beyond that. A quote from quality guru W. Edwards Deming comes to mind: “It is not necessary to change. Survival is not mandatory.”
Based on our experience, you don’t know exactly how much spending you will avoid by doing MBSE, but the consequences are real nonetheless and will show up as real money (50% of your program budget/resources) late in development if you don’t do something. On this topic, I recommend a paper I wrote back in the 90s that ended up on the recommended reading list at MIT, along with various systems engineering primers that include real-world case studies that will help your cause. The paper documents a “simple” ROI justification denied by the accountants and follows the consequences, which cost 10x-50x the denied investment in real money. It won the 1997 INCOSE Best Paper award (and was also published in IEEE Spectrum): “The Allegory of the Humidifier: A Case Study on Return on Investment in Systems Engineering”. It should give you some chuckles while reading and remind you of your own “humidifier experiences” as you work through your MBSE journey justifications. I expect some “oh yeah, I can top that” stories in return.
Glad to help in your justifications… just let us know
Mark Sampson
Systems Engineering Evangelist