I strongly recommend watching/reading the entire report, or the summary by Sal Mercogliano of What's Going On In Shipping [0].
Yes, the loose wire was the immediate cause, but there was far more going wrong here. For example:
- The transformer switchover was set to manual rather than automatic, so it didn't automatically fail over to the backup transformer.
- The crew did not routinely train transformer switchover procedures.
- The two generators were both using a single non-redundant fuel pump (which was never intended to supply fuel to the generators!), which did not automatically restart after power was restored.
- The main engine automatically shut down when the primary coolant pump lost power, rather than using an emergency water supply or letting it overheat.
- The backup generator did not come online in time.
It's a classic Swiss Cheese model. A lot of things had to go wrong for this accident to happen. Focusing on that one wire isn't going to solve all the other issues. Wires, just like all other parts, will occasionally fail. One wire failure should never have caused an incident of this magnitude. Sure, there should probably be slightly better procedures for checking the wiring, but next time it'll be a failed sensor, actuator, or controller board.
If we don't focus on providing and ensuring a defense-in-depth, we will sooner or later see another incident like this.
[0]: https://www.youtube.com/watch?v=znWl_TuUPp0
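To put some rough numbers on the defense-in-depth point: here is a minimal sketch (my own, not from the report; the layer names come from the list above and every probability is invented for illustration) of how independent layers multiply down the odds of a fault turning into a catastrophe, and what disabling a couple of them does:

```python
# Illustrative only: layer names taken from the comment above, probabilities invented.
# Each layer independently "catches" a fault that reaches it.
from math import prod

layers = {
    "automatic transformer switchover": 0.01,     # P(layer fails to catch the fault)
    "redundant fuel pumps for the generators": 0.01,
    "main engine ride-through on cooling loss": 0.05,
    "backup generator online in time": 0.05,
}

def p_catastrophe(active_layers):
    """P(a fault gets past every active layer) = product of per-layer miss rates."""
    return prod(active_layers.values()) if active_layers else 1.0

print(f"all layers intact:   {p_catastrophe(layers):.2e}")

# Now disable the layers that, per the report, were not actually available that night.
degraded = {name: p for name, p in layers.items()
            if name not in ("automatic transformer switchover",
                            "redundant fuel pumps for the generators")}
print(f"two layers disabled: {p_catastrophe(degraded):.2e}")
```

With these made-up numbers, losing just two layers makes a routine wire-level fault four orders of magnitude more likely to end in a blackout at the worst possible moment.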
I have found that 99% of all network problems are bad wires.
I remember that the first thing the IT guys at my old company would do was throw out every Ethernet cable and replace it with one fresh out of the bag.
But these ships tend to be houses of cards. They are not taken care of properly, and run on a shoestring budget. Many of them look like floating wrecks.
Thanks for the summary for those of us who can't watch video right now.
There are so many layers of failures that it makes you wonder how many other operations on those ships are only working because those fallbacks, automatic switchovers, emergency supplies, and backup systems save the day. We only see the results when all of them fail and the failure happens to result in some external problem that means we all notice.
It seems to be standard "normalization of deviance", to use the language of safety engineering. You have five layers of fallbacks, so over time, skipping any of the middle layers doesn't cause anything to visibly fail. In time you end up with a true safety factor equal only to the last layer. Then that fails, and looking back, "everything had to go wrong".
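A toy way to see that drift (my own sketch, all rates invented, nothing from the report): if fallback layers can fail silently and are only ever exercised by a real demand, the system quietly spends most of its life with far fewer working layers than anyone believes.

```python
# Toy model with invented rates: fallback layers fail silently, and only periodic
# testing (or an accident) puts them back into service.
import random

def run(days, n_layers=5, p_silent_break=0.002, p_demand=0.01, test_every=None):
    random.seed(0)                                # same seed so the two runs are comparable
    working = [True] * n_layers
    accidents = 0
    for day in range(days):
        if test_every and day % test_every == 0:
            working = [True] * n_layers           # inspection finds and repairs latent failures
        working = [w and random.random() > p_silent_break for w in working]
        if random.random() < p_demand and not any(working):
            accidents += 1                        # a demand arrived while every layer was down
            working = [True] * n_layers           # everything gets fixed after the accident
    return accidents

print("never tested:        ", run(days=365 * 20))
print("tested every 90 days:", run(days=365 * 20, test_every=90))
```

The point isn't the specific numbers; it's that routinely exercising the middle layers is what keeps the "five layers of fallbacks" claim honest.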
As Sidney Dekker (of Understanding Human Error fame) says: Murphy's Law is wrong - everything that can go wrong will go right. The problem arises from the operators all assuming that it will keep going right.
I remember reading somewhere that part of Qantas's safety record came from the fact that at one time they had the highest number of minor issues. In some sense, you want your error detection curve to be smooth: as you get closer to catastrophe, your warnings should get more severe. On this ship, it appeared everything was A-OK till it bonked a bridge.
This is the most pertinent thing to learn from these NTSB crash investigations - it's not just what went wrong in the final disaster, but all the earlier failures that went undetected and left them down to one layer of defense.
Your car engaging auto brake to prevent a collision shouldn't be a "whew, glad that didn't happen" and more a "oh shit, I need to work on paying attention more."
Oh, it gets even worse!
The NTSB also had some comments on the ship's equivalent of a black box. Turns out it was impossible to download the data while it was still inside the ship, the manufacturer's software was awful and the various agencies had a group chat to share 3rd party software(!), the software exported thousands of separate files, audio tracks were mixed to the point of being nearly unusable, and the black box stopped recording some metrics after power loss "because it wasn't required to" - despite the data still being available.
At least they didn't have anything negative to say about the crew: they responded promptly and appropriately - they just didn't stand a chance.
The fuel pump not automatically restarting on power loss may actually have been an intentional safety feature to prevent scenarios like pumping fuel into a fire in or around the generators. Still part of the Swiss cheese model, of course.
It wasn't. They were feeding generators 3 & 4 with the pump intended for flushing the lines while switching between different fuel types.
The regular fuel pumps were set up to automatically restart, which is why a set of them came online to feed the standby generator (which automatically spun up after 3 & 4 failed, and wasn't tied to the fuel-line-flushing pump) after the second blackout.
Hopefully the lesson from this will be received by operators: it's way cheaper to invest in personnel, training, and maintenance than to let the shit hit the fan.
Why? It's cost them $100M (https://www.justice.gov/archives/opa/pr/us-reaches-settlemen...) but rebuilding the bridge is going to cost $5.2 billion, so if gundecking all this maintenance for 20+ years has saved more than $100M, they will do it again.
From your article - this answered a question I had:
> The settlement does not include any damages for the reconstruction of the Francis Scott Key Bridge. The State of Maryland built, owned, maintained, and operated the bridge, and attorneys on the state’s behalf filed their own claim for those damages. Pursuant to the governing regulation, funds recovered by the State of Maryland for reconstruction of the bridge will be used to reduce the project costs paid for in the first instance by federal tax dollars.
Isn't there a big liability insurance payout on this towards the 5.2 Billion, and if so won't the insurer be more motivated to mandate compliance?
The vessel owner may possibly be able to recover some of that from the manufacturer, as the wiring was almost certainly a manufacturing error, and maybe some of the configurations that continued the blackout were manufacturer choices as well.
Actually, to be even more cynical….
If everyone saved $100M by doing this and it only cost one shipper $100M, then of course everyone else would do it and just hope they aren’t the one who has bad enough luck to hit the bridge.
And statistically, almost all of them will be okay!
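The back-of-the-envelope version of that incentive, using the round numbers from this thread plus an invented per-operator probability (none of this is from the settlement documents):

```python
# Illustrative arithmetic only. The savings and liability figures are the round
# numbers used in this thread; the catastrophic-incident probability is invented.
savings_from_skimping = 100e6       # what an operator saves over the years, per the thread
capped_liability      = 100e6       # the settlement figure mentioned above
p_catastrophic_hit    = 1e-4        # invented: chance any given operator ever has the bad day

expected_cost = p_catastrophic_hit * capped_liability
print(f"expected liability from skimping: ${expected_cost:,.0f}")
print(f"guaranteed savings:               ${savings_from_skimping:,.0f}")
print("skimping is 'rational' for the operator:", expected_cost < savings_from_skimping)
# The $5.2B bridge is not on this ledger at all, which is exactly the problem.
```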
Although I was never named to a mishap board, my experience from my prior career in aviation is that while it is valuable to identify and try to fix the ultimate root cause of a mishap, it's also important to keep in mind what we called the "Swiss cheese model."
Basically, the line of causation of the mishap has to pass through a metaphorical block of Swiss cheese, and a mishap only occurs if all the holes in the cheese line up. Otherwise, something happens (planned or otherwise) that allows you to dodge the bullet this time.
Meaning a) it's important to identify places where firebreaks and redundancies can be put in place to guard against failures further upstream, and b) it's important to recognize times when you had a near-miss, and still fix those root causes as well.
Which is why the "retrospectives are useless" crowd spins me up so badly.
> it's important to recognize times when you had a near-miss, and still fix those root causes as well.
I mentioned this principle to the traffic engineer when someone almost crashed into me because of a large sign that blocked their view. The engineer looked into it and said the sight lines were within spec, but just barely, so they weren't going to do anything about it. Technically the person who almost hit me could have pulled up to where they had a good view and looked both ways as they were supposed to, but that is relying on one layer of the cheese to fix a hole in another, to use your analogy.
Likewise with decorative hedges and other gardenwork; your post brought to mind one hotel I stay at regularly where a hedge is high enough and close enough to the exit that you have to nearly pull into the street to see if there are oncoming cars. I've mentioned to the FD that it's gonna get someone hurt one day, yet they've done nothing about it for years now.
Send certified letters to the owner of the hedge and whatever government agency would enforce rules about road visibility. That puts them "on notice" legally, so that they can be held accountable for not enforcing their rules or taking precautions.
The problem is that they are legally doing nothing wrong. Everything is done according to the rules, so they can't be held accountable for not following them. After all, they are taking all reasonable precautions, what more could be expected of them?
The fact that the situation on the ground isn't safe in practice is irrelevant to the law. Legally the hedge's owner is doing everything right, so the blame falls on the driver. At best a "tragic accident" will result in a "recommendation" to whatever board is responsible for the rules to review them.
All that applies to criminal cases, but if a civil lawsuit is started and evidence is presented to the jury that the parties being sued had been warned repeatedly that an accident would eventually occur, it can get quite spicy.
Which is why if you want to be a bastard, you send it to the owners, the city, and both their insurance agencies.
This is stupid. Unless you happen to be the one who crashes, it won't be a factor at all.
> Which is why the "retrospectives are useless" crowd spins me up so badly.
When I see complaints about retrospectives from software devs they're usually about agile or scrum retrospective meetings, which have evolved to be performative routines. They're done every sprint (or week, if you're unlucky) and even if nothing happens the whole team might have to sit for an hour and come up with things to say to fill the air.
In software, the analysis following a mishap is usually called a post-mortem. I haven't seen many complaints that those have no value; they are usually highly appreciated. Though sometimes the "blameless post-mortem" people take the term a little too literally and try to avoid exploring useful failures if they might cause uncomfortable conversations about individuals making mistakes or even dropping the ball.
Agree. I am obligated to run those retrospectives and the SNR is very poor.
It is nice though (as long as there isn't anyone in there that the team is afraid to be honest in front of), when people can vent about something that has been pissing them off, so that I as their manager know how they feel. But that happens only about 15-20% of the time. The rest is meaningless tripe like "Glad Project X is done" and "$TECHNOLOGY sucks" and "Good job to Bob and Susan for resolving the issue with the Acme account"
this is essentially the gist of https://how.complexsystems.fail which has been circulating more with discussions of the recent AWS/Azure/Cloudflare outages.
> All the holes in the cheese line up...
I absolutely heard that in Hoover's voice.
Is there an equivalent to YouTube's Pilot Debrief or other similar channels but for ships?
>Which is why the "retrospectives are useless" crowd spins me up so badly.
As an Ops person, I've said that before when talking about software, and it's mainly because most companies will refuse to listen to the lessons inside of them, so why am I wasting time doing this?
To put it in aviation terms, I'll write up something like (numbers made up) "Hey, V1 for a Hornet loaded at 49,000 pounds needs to be 160 knots, so it needs 10,000 feet for takeoff." Well, the sales team comes back and says NAS Norfolk is only 8,700 ft and the customer demands 49,000+ pound loads, and we are not losing revenue, so quiet, Ops nerd!
Then a 49,000+ pound Hornet loses an engine, overruns the runway, the fireball I said would happen happens, and everyone is SHOCKED, SHOCKED I TELL YOU, that this is happening.
Except it's software and not aircraft, and the loss was just some money, maybe, so no one really cares.
> Basically, the line of causation of the mishap has to pass through a metaphorical block of Swiss cheese, and a mishap only occurs if all the holes in the cheese line up.
The metaphor relies on you mixing and matching some different batches of presliced Swiss cheese. In a single block, the holes in the cheese are guaranteed to line up, because they are two-dimensional cross sections of three-dimensional gas bubbles. The odds of a hole in one slice of Swiss cheese lining up with another hole in the following slice are very similar to the odds of one step in a staircase being followed by another step.
And there's the archetypal comment on technology-based social media that is simultaneously technically correct and utterly irrelevant to the topic at hand.
The older I get, the more I trust people over rules.
Note that "Don't make mistakes" is no more actionable for maintenance of a huge cargo ship than for your 10MLoC software project. A successful safety strategy must assume there will be mistakes and deliver safe outcomes nevertheless.
The big problem was that they didn't have the actual fuel pumps running but were using a different pump that was never intended to fulfill this role. And this pump stays off if the power fails for any reason.
The bad contact on the wire was just the trigger; it should have been recoverable had the regular fuel pumps been running.
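A minimal sketch of what that difference in restart behavior means, assuming (as the thread and the report describe) that the regular supply/booster pumps auto-restart when power returns and the flushing pump does not. This is illustrative Python, not the vessel's actual control logic:

```python
# Illustrative sketch, not the ship's actual PLC code.
from dataclasses import dataclass

@dataclass
class Pump:
    name: str
    auto_restart: bool          # does it restart on its own when power returns?
    running: bool = True

    def on_power_loss(self):
        self.running = False

    def on_power_restored(self):
        if self.auto_restart:
            self.running = True  # otherwise it waits for a manual start

pumps = [
    Pump("fuel supply pump", auto_restart=True),
    Pump("fuel booster pump", auto_restart=True),
    Pump("flushing pump (misused as the generators' fuel supply)", auto_restart=False),
]

for p in pumps:
    p.on_power_loss()
for p in pumps:
    p.on_power_restored()

for p in pumps:
    print(f"{p.name}: {'running' if p.running else 'STOPPED - needs manual restart'}")

# If the generators' fuel pressure happens to depend on the one pump that stays
# stopped, restoring electrical power does not restore fuel supply, and the
# generators stall again - the second blackout.
```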
In a well engineered control system, any single failure will not result in a loss of control over the system.
Was a FMECA (Failure Mode, Effects, and Criticality Analysis) performed on the design prior to implementation in order to find the single points of failure, and identify and mitigate their system level effects?
Evidence at hand suggests "No."
"Catastrophe requires multiple failures – single point failures are not enough. The array of defenses works. System operations are generally successful. Overt catastrophic failure occurs when small, apparently innocuous failures join to create opportunity for a systemic accident. Each of these small failures is necessary to cause catastrophe but only the combination is sufficient to permit failure. Put another way, there are many more failure opportunities than overt system accidents. Most initial failure trajectories are blocked by designed system safety components. Trajectories that reach the operational level are mostly blocked, usually by practitioners."
https://how.complexsystems.fail/#3
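For anyone who hasn't run one: the heart of an FMECA is simply enumerating failure modes and asking what the system-level effect is and whether anything backs the component up. A toy sketch, with components and ratings loosely invented from the details discussed in this thread (not taken from any real analysis of the vessel):

```python
# Toy FMECA-style pass: flag single points of failure with severe system effects.
# Components, effects, and severities are illustrative, loosely based on this thread.
rows = [
    # (component,                 failure mode,        system-level effect,          severity 1-10, redundant?)
    ("HV/LV transformer",         "breaker trip",      "loss of low-voltage bus",     8, True),   # backup existed, but switchover was manual
    ("flushing pump",             "stops on blackout", "fuel starvation of DG3/DG4",  9, False),
    ("supply/booster pumps",      "stops on blackout", "fuel starvation",             9, True),
    ("terminal-block connection", "loose wire",        "spurious breaker trip",       7, False),
    ("steering gear pumps",       "loss of power",     "loss of steering",           10, True),
]

print("Single points of failure with severity >= 7:")
for component, mode, effect, severity, redundant in rows:
    if severity >= 7 and not redundant:
        print(f"  {component}: {mode} -> {effect} (severity {severity})")
```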
> In a well engineered control system, any single failure will not result in a loss of control over the system
That's true in this case, as well. There was a long cascade of failures including an automatic switchover that had been disabled and set to manual mode.
The headlines about a loose wire are the media's way of reducing it to something understandable.
Most cargo ships have a single main engine with plenty of backup-less failure points. They are sort of engineered so these failures can't happen suddenly, but you can help yourself to a bunch of videos on how substandard fuel and parts shortages cause week-long power losses in the middle of the ocean.
System designers and regulators are aware that the main engine is a single point of failure, but they generally don't consider loss of main engine power alone to be an immediate emergency: there are redundant systems to retain electrical and hydraulic power. Losing power and steering together is an emergency, and steering is degraded without power, but had they still been able to use the rudder they wouldn't have hit the bridge.
Video explanation: https://www.youtube.com/watch?v=bu7PJoxaMZg
That was super helpful. I was assuming from skimming the text description that it was a failed crimp.
A lot of people wildly under-crimp things, but marine vessels have not only nuanced wire requirements but also more stringent crimping requirements, which the field at large frustratingly refuses to adhere to despite ABYC and other codes insisting on it.
> A lot of people wildly under-crimp things
The good tools will crimp to the proper pressure and make it obvious when it has happened.
Unfortunately the good tools aren't cheap. Even when they are used, some techs will substitute their own ideas of how a crimp should be made when nobody is watching them.
While the US is still very manual at panel building, Europe is not.
So outside of waiting time, I can go from eplan to "send me precrimped and labeled wires that were cut, crimped, and labeled by machine and automatically tested to spec" because this now exists as a service accessible even to random folks.
It is not even expensive.
It’s been noted that automatic failover systems did not kick in due to shortcuts being taken by the company: https://youtu.be/znWl_TuUPp0
When shipowners are willing to cut costs with sketchy moves like registering with a random landlocked African country, why should we believe they'll spend any time or effort reading/implementing NTSB guidelines? It isn't like there's some well-respected international body like ICAO calling the shots.
I know a little about planes and nothing about ships so maybe this is crazy but it seems to me that if you're moving something that large there should be redundant systems for steering the thing.
There are.[1] Unfortunately, they take longer to employ than the time the crew had.
[1] As it happens I open with an anecdote about steering redundancy on ships in this post: https://www.gkogan.co/simple-systems/
Thanks for this comment!
Shipping is a low-margin business. That business structure does not incentivize paying for careful analysis of failure modes.
Seems to me the only effective and enforceable redundancy that can easily be imposed by regulation would be mandatory tug boats.
A label placed half an inch wrong, creating a misleading affordance -> a 200,000-ton bridge collapse, 6 deaths, and tens of billions of dollars of economic damage.
An instant classic destined for the canon of engineering disasters drilled into first-year engineers (or are the other Swiss cheese holes too confounding?).
Where do you think it would fit on the list?
I guess this will still be below Therac-25 for CS and CE students, but above it for EE, ME, and civil engineering students.
The image brings to mind the Cisco ethernet boot infographic: https://www.cisco.com/c/en/us/support/docs/field-notices/636...
I can't believe I've never seen this. I literally laughed out loud when I got to the image. Thank you! Absolute gold
So there were two big failures: the electrician not doing the work to code, and the inspector just checking the box during the final inspection.
No, there was a larger failure: whoever designed the control system such that a single loose wire on a single terminal block (!) could take down the entire steering system for a 91,000 ton ship.
They didn't.
If you read the report, they were misusing this pump for fuel supply when it wasn't meant for that. And it was non-redundant, whereas the actual fuel supply pumps are.
It's like someone repurposing a Husky air compressor to power a pneumatic fire suppression system and then saying the issue is someone tripping over the cord and knocking it out.
There's a 3rd failure: the failure to install/upgrade dolphins that could deflect a modern containership, despite the identified need for such. That proposed project seems cheap in retrospect.
Yes, 100%. Lots of failures across the board here. Especially with large ships and how many different nations they might be registered in, I can't imagine it's easy to have a lot of regulatory oversight into their construction, mechanical inspection or maintenance schedules. I'm curious how modern ports handle this problem, feels like it could cause a ton of issues beyond just catastrophic ones like this one.
No. Lots more: it's because they were abusing a non-redundant pump to supply fuel to the generators. Which then failed, which ....
From the report:
> The low-voltage bus powered the low-voltage switchboard, which supplied power to vessel lighting and other equipment, including steering gear pumps, the fuel oil flushing pump and the main engine cooling water pumps. We found that the loss of power to the low-voltage bus led to a loss of lighting and machinery (the initial underway blackout), including the main engine cooling water pump and the steering gear pumps, resulting in a loss of propulsion and steering.
...
> The second safety concern was the operation of the flushing pump as a service pump for supplying fuel to online diesel generators. The online diesel generators running before the initial underway blackout (diesel generators 3 and 4) depended on the vessel’s flushing pump for pressurized fuel to keep running. The flushing pump, which relied on the low-voltage switchboard for power, was a pump designed for flushing fuel out of fuel piping for maintenance purposes; however, the pump was being utilized as the pump to supply pressurized fuel to diesel generators 3 and 4. Unlike the supply and booster pumps, which were designed for the purpose of supplying fuel to diesel generators, the flushing pump lacked redundancy. Essentially, there was no secondary pump to take over if the flushing pump turned off or failed. Furthermore, unlike the supply and booster pumps, the flushing pump was not designed to restart automatically after a loss of power. As a result, the flushing pump did not restart after the initial underway blackout and stopped supplying pressurized fuel to the diesel generators 3 and 4, thus causing the second underway blackout (low-voltage and high-voltage).
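The cascade in those two paragraphs is easier to see as a dependency walk. A small illustration (the edges are paraphrased from the report text quoted above; the code itself is mine, not NTSB material):

```python
# Dependencies paraphrased from the quoted report text; illustration only.
depends_on = {
    "low-voltage switchboard":   ["low-voltage bus"],
    "steering gear pumps":       ["low-voltage switchboard"],
    "main engine cooling pump":  ["low-voltage switchboard"],
    "main engine":               ["main engine cooling pump"],
    "flushing pump":             ["low-voltage switchboard"],
    "diesel generators 3 and 4": ["flushing pump"],   # the misuse: their fuel came from the flushing pump
}

def is_down(component, dead):
    """A component is down if it failed itself or anything it depends on is down."""
    if component in dead:
        return True
    return any(is_down(dep, dead) for dep in depends_on.get(component, []))

initial_fault = {"low-voltage bus"}   # the loose-wire breaker trip
for component in depends_on:
    if is_down(component, initial_fault):
        print(f"{component}: down")
```

And because the flushing pump would not restart on its own, generators 3 and 4 stayed starved of fuel even after the low-voltage bus was restored, which is the second blackout.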
The terminal blocks could also have been designed to aid visual inspection.
"Contact" is a weird choice of words.
Right? Like when I read that I thought we're talking a little paint-swapping.
No, we are not talking a little paint-swapping.
Yeah, when the word “allision” was right there!
Thought the same; the bridge fell along its entire length, so "contact" sounds like a way to undersell it. Passing up such an opportunity for clickbait is interesting in this day and age.
I’m not sure that the NTSB is really in the clickbait business. But yes, contact does seem to really be underselling the event.
Not really, because that's where that part of the investigation ends.
Pre-contact everything is about the ship and why it hit anything, post-contact everything is about the bridge and why it collapsed. The ship part of the investigation wouldn't look significantly different if the bridge had remained (mostly) intact, or if the ship had run aground inside the harbor instead.
The date for bridge completion was bumped from 2028 to 2030 already. I assume it won't be done until 2038. It is absolutely murdering traffic in the Baltimore area, not having a bridge. I would be super interested in seeing where every single dollar goes for this project, I assume at least 1/3 of it will be skimmed off the top.
The consensus seems to be skimming won’t occur. I’d encourage people to research the corruption of elected officials in the Baltimore area.
"and WAGO Corporation, the electrical component manufacturer"
Sucks to be any of the YouTube influencers today telling everyone they should use WAGO connectors in all their walls.
Seriously though, impressive to trace the issue down this closely. I am at best an amateur DIY electrician, but I am always super careful about the quality of each connection.
The WAGO connectors typically used in home wiring have a transparent plastic shell which lets you see whether the wire made it all the way through the spring clip. The ones shown in the NTSB video had an opaque shell around the spring clip.
I think my attempt at humor butthurt a lot of WAGO fans. I used "seriously though" after in my actual... serious comment.
I don't see anything in the report that suggests the connector failed. It sounds like the installer failed. Trust me, they can screw up twist connections too :)