Thursday, August 10, 2017

SMS And Captain’s Authority

There are several accident reports where a Captain made one single decision that led to a fatal accident. The first officer or other flight crew members may have attempted to communicate with the Captain, but without luck. Investigations often assume that if another flight crew member had interfered with the Captain's duties the accident would have been avoided. When sitting at an office desk with 20/20 hindsight, these accidents could have been averted, but at the time and location of the event the Captain and first officer were not doing anything other than what they were trained for.

Training is more than the official training where check-boxes are filled in. Training also includes normal operations, organizational expectations of priorities, and unwritten rules. Air Florida 90 departing Washington National Airport VA, United 173 on approach to Portland OR, Air Ontario 1363 departing Dryden ON, Uruguayan Air Force 571 in the Andes Mountains and KLM 4805 departing Los Rodeos Airport are all examples of a Captain's decision as the final link in an accident. When a Captain is about to make a fatal decision, a lower ranking flight crew member may view it as a responsibility under a Safety Management System program to make safety decisions and interfere with the Captain's duties, or physically take control of the aircraft.

Major accidents have generated great safety improvements.
The Captain of an aircraft is the person acting as the pilot-in-command, with responsibility and authority for the operation and safety of the aircraft during flight time. Flight time is the time from the moment an aircraft first moves under its own power for the purpose of taking off until the moment it comes to rest at the end of the flight.

A Safety Management System does not override this regulatory requirement. The purpose of the Safety Management System is to operate as an additional layer of safety and to improve safety by continuous or continual improvements. Continuous improvement is to make changes to the current processes for improvement, while continual improvement is achieved by identifying process capability and making changes to the capability of operations, or processes, to produce a more desired outcome. The beauty of an SMS is that the Safety Management System contains a process for ensuring that personnel are trained and competent to perform their duties and that they are accountable for safety. The Captain must always be trained to be competent to make final decisions and to perform duties as the final authority. This authority cannot be removed from the Captain. Accountability within an SMS world is for a person, without supervision, to comply with regulatory requirements, standards, policies, recommendations, job descriptions, expectations or intent of job performance, and for personnel to be actively and independently involved. Derived from accountability comes a Just Culture, which is an organizational culture where there is Trust, Learning, Accountability and Information Sharing.

When an Enterprise expects a lower ranking crew member to interfere with the Captain's duties based on that person's opinion, the Enterprise has trained neither the Captains nor the other flight crew members to perform their duties. The Captain's duty is the authority for the operation and safety of the aircraft, which includes analyzing any information available for decision making. The Captain is the ultimate authority for the safe operation of an aircraft, and interfering with this authority is a regulatory non-compliance. Any air operator should have a training program in place where lower ranking flight crew members have an opportunity to volunteer safety information to the Captain at any time during flight time, without the authority to take operational control of the aircraft. When an Enterprise widely accepts that a lower ranking officer has the authority to interfere with the Captain's duties, there is no opportunity for safety improvements, since the Enterprise is relying on the non-captain to make decisions.

Major Accidents Generate Safety Improvements

After Air Florida 90 departing Washington National Airport VA, airlines began enacting policies to ensure that at least one more seasoned crew member was on board planes at all times. They also began reappraising the traditional unwritten rule that the captain could not be questioned. From that point onward, first officers were encouraged to speak up if they believed a captain was making a mistake. Applying this concept is SMS in an undocumented format, where the Captain has access to information from flight crew members to make the best decision for safe operations.
After United 173 on approach to Portland OR, training addressed behavioral management challenges such as poor crew coordination, loss of situational awareness, and judgment errors frequently observed in aviation accidents. Applying this concept is SMS in an undocumented format, accepting that human behaviours, or human factors, play a role in safety.
After Air Ontario 1363 departing Dryden ON, many significant changes were made to the Canadian Aviation Regulations. These included new procedures regarding re-fuelling and de-icing, as well as many new regulations intended to improve the general safety of all future flights in Canada. Applying this concept is SMS in an undocumented format, in that proactive measures are implemented for continuous safety improvements.
After the KLM 4805 departing Los Rodeos Airport accident, changes were made to international airline regulations and to aircraft. Aviation authorities around the world introduced requirements for standard phrases and a greater emphasis on English as a common working language.
Cockpit procedures were also changed. Hierarchical relations among crew members were played down. More emphasis was placed on team decision-making by mutual agreement, part of what has become known in the industry as crew resource management. Applying this concept is SMS in an undocumented format where an Enterprise accepts that not only knowledge, but also comprehension of data is vital to safety.
Remember rules or comprehend safety.

After Uruguayan Air Force 571 in the Andes Mountains there were no major safety improvements implemented. However, this is also an application of the concept of SMS, where the risk level, based on data, is accepted or rejected. In this case the risk level for this type of accident happening again was accepted and no major changes to safety were implemented. As knowledge and comprehension were gained, human factors later became a safety component, one which had been overlooked in 1972.

SMS means that aviation safety has no end. SMS means that the current level of safety comprehension may be different in a few years and that other latent hazards will be discovered. SMS is continuous or continual improvement, where every day is a new challenge to ensure complete safety for the traveling public.



CatalinaNJB

Wednesday, July 26, 2017

AC759 CLEARED TO LAND 28R

SMS Does Not Make Aviation Safer
On July 7, 2017 Air Canada 759 lined up the approach for landing on Taxiway Charlie at SFO. This set of parameters was set up for the worst accident in the history of aviation. Based on this incident, does the argument hold water that a Safety Management System makes flying safer? Aviation safety is determined by several factors, where one factor that makes flying safer is how well an enterprise applies SMS as an additional layer of safety in support of its safety processes. An enterprise that supports bureaucratic processes, or processes that are designed to support the organization, has check-box syndrome processes. These processes have only one goal, which is to control, but not manage, the operational Safety Management System. In public opinion, the blame-factor may be assigned to the Safety Management System requirement itself. However, the Safety Management System itself is a fail-free system that tells a story of safety performance. Since it is a parallel system to the operational safety systems, SMS collects samples of data to be applied to processes as an additional layer of safety. SMS is a system which regularly checks in with the operations for a snapshot.

What is SMS?
SMS is the "ugly duckling" in safety that is to blame when things go wrong. When expectations, which are only opinions, are developed as guidance material under an SMS and applied as prescriptive regulations, then safety operating processes are set up for failure. With this approach an SMS is not given the authority, accountability or opportunity to function within a just culture, where there is trust, learning, accountability and information sharing. When expectations are applied as prescriptive regulatory requirements, the first task of SMS becomes to ensure that the check-boxes are correctly checked and completed. In a bureaucratic organization operating in compliance with the check-box syndrome, any reference to operational safety is determined by the status of its check-boxes. SMS is not about counting checked boxes; it is the hard work of making operational processes safer today than they were yesterday.

What SMS Is Not
SMS is not the magic wand of miracles that ensures accidents never happen again, and SMS is not a system where prescriptive expectations are applied as regulations. SMS is not a one-size-fits-all model and SMS is not a model where everything is acceptable. SMS is not emotion- or opinion-based and SMS is not a system where processes must conform to the SMS design. SMS is not a system of perfect people or a system within a perfect virtual world. SMS is not a trial-and-error system and SMS is not a system with an end or a beginning. SMS is not rolling the dice for an answer; it is dropping the marbles to see where they scatter.
There are a lot of things that SMS is not, but many of these things that SMS is not are what SMS has become.

SMS Has Become A Conglomerate Of Opinions
SMS has become a conglomerate of opinions by virtue of the good intent to make flying safer. However, good intent and opinion have turned out to be the "killer-bee" of aviation safety. SMS has become so complex that very few can explain why certain processes or corrective actions are applied. Often these changes are made because the regulator is macro-managing portions, or all, of an enterprise. When a regulatory finding is issued against an emergency response plan full-scale test because the test discovered deficiencies, then the SMS did not fail but was successful by the discovery of faulty processes. In an attempt to establish the utopia of safety, SMS has become a system where everything is defined as a safety issue, to the degree that safety itself has become virtual facts. Operational size and complexity are forgotten, and Safety Critical Areas and Safety Critical Functions have no meaning.

Safety Critical Areas And Safety Critical Functions
An enterprise is failing its SMS unless that SMS includes Safety Critical Areas and Safety Critical Functions. Operating with an SMS for the purpose of improving safety without them is to support the red tape of a bureaucratic enterprise, where the processes are designed to support their design and not the operations.

Defining Safety Critical Areas and Safety Critical Functions is to place weight on areas of operations and on functions within these areas. Not all areas of aviation are safety critical. In an organization where there are no safety critical areas or functions, the decision making process is simple, but without accountability. A Safety Critical Area could be night approaches, with a Safety Critical Function being approaches to SFO or YCB at midnight during the month of July. An approach to SFO may be a safety critical function, while an approach to YCB in the High Arctic may not be. On the other hand, an approach into YCB on a January day at noon might be a safety critical function. When all areas of aviation are assigned the same key, or the same weight, for safety critical areas and functions, it becomes impossible to target areas for learning and training purposes. Safety then becomes wishful thinking of an aviation utopia where accidents will never happen again. Only when it is understood that an accident could happen in the future are the SMS tools ready to be applied at the operational level.
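
As a minimal sketch of this weighting idea, the snippet below assigns illustrative weights to a few hypothetical safety critical areas and functions and ranks the ones that should be targeted for training. The area names, weights and threshold are assumptions for illustration only, not values from any operator's SMS.

# Illustrative sketch: hypothetical safety critical areas, each holding
# safety critical functions with a weight (0.0 - 1.0) reflecting how
# critical that function is to this particular operation.
safety_critical_areas = {
    "night approaches": {
        "approach to SFO at midnight in July": 0.9,
        "approach to YCB at midnight in July": 0.3,
        "approach to YCB at noon in January": 0.8,
    },
    "ground de-icing": {
        "de-icing before departure in freezing drizzle": 0.95,
    },
}

def training_priorities(areas, threshold=0.7):
    """Return functions at or above the threshold weight, highest first,
    so training effort can be targeted instead of spread evenly."""
    ranked = [
        (area, function, weight)
        for area, functions in areas.items()
        for function, weight in functions.items()
        if weight >= threshold
    ]
    return sorted(ranked, key=lambda item: item[2], reverse=True)

for area, function, weight in training_priorities(safety_critical_areas):
    print(f"{weight:.2f}  {area}: {function}")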

A vital question to ask for continuous safety is: does Transport Canada accept anything less from an Enterprise than that all areas are safety critical areas and all functions are safety critical functions? If they expect that all areas and functions of aviation must be equally safety critical, it becomes a conflicting task for the operator to operate with an effective SMS. In a bureaucratic organization it is preferred that everything and everyone be equal.

An enterprise operating an SMS without safety critical areas and functions is spinning its wheels. SMS is the NextGen of aviation safety, where processes, or "how we normally do things", are analyzed for variables and for the risk level at which these variables affect operations.

Does One Incident Qualify As System Failure?
Yes, it does when criteria are met and, based on guidance supported by Transport Canada, it does.
This practice has been established by awarding Enterprises system failure findings for one single failure to meet an expectation. System failures are any findings under CARs 107.02. Based on this guidance, the one incident, since the AC 759 approach most likely did not meet the expectation, would qualify as an organizational system failure. However, concluding that one irregularity is a system failure is a rush to judgement unless that one irregularity is a function of time and comprehension. Often TC inspectors rush to judgement when issuing a system finding for one non-compliance with one expectation. Transport Canada inspectors may have good intentions when applying expectations as system failures, but they have departed from the concept of SMS and further departed from the basics of their initial SMS training when SMS first became a regulatory requirement.

Transport Canada SMS Survey
In a survey published in the JDA Journal, April 13, 2017, the vast majority of Transport Canada Inspectors view themselves as having better knowledge of airline operations than the operators themselves have, and as being better qualified than the operator to fix safety problems before they become accidents or incidents. Further, this survey identified correctly that SMS transfers responsibility for setting acceptable risk levels and monitoring safety performance to the operator itself. This is a vital and valid point for improving safety in aviation, since the system under SMS is operationally based and not bureaucratically based with Transport Canada accepting the risk. SMS is the NextGen of aviation safety where the regulator is removed from operations. Another point in this survey is that 81 percent of inspectors surveyed predicted a major aviation accident soon. Nobody can predict a major aviation accident, not even a TC inspector. This survey reveals the bias of inspectors towards Canadian aviation operators and the inspectors' utopian view that they know best. Transport Canada Inspectors have yet to show any data corroborating SMS-failure statements over the last ten years. This survey was published as "A Learning Lesson For FAA".

SMS Did Not Fail
SMS did not fail AC 759 on the approach to 28R at SFO. It was the operational practices, or how the flight is normally done, that failed AC 759. Whether there were none, two or five aircraft on Taxiway Charlie is relevant to the potential catastrophic outcome, but irrelevant to the SMS processes. Since the quality of a process is unknown until the final output is known, it becomes vital to safety that the progress of operational safety processes allows for ongoing risk assessment and decision making by the flight crew. Quality Assurance is a result of variations in quality output by the same process.

Does SMS Make Flying Safer?
Yes, SMS makes flying safer, and the SFO incident does not make flying any less safe. Safety management systems help companies identify safety risks before they become bigger problems. If one company ignores its own SMS tools and the help SMS offers, that company is less safe than if it had elected to apply its tools and predict the risk level of identified hazards. Flying does not become less safe, or suffer a reduction in safety, without SMS, but an organization without SMS lacks the opportunity for continuous safety improvements by applying SMS concepts and principles. Over time an enterprise that is "listening to their SMS" gains ground in safety and becomes safer in operations than the non-SMS, or "ignore-SMS", organizations.

Additional Layer and Parallel Approach
SMS is an additional layer of safety and a parallel approach to operational processes. SMS does not make it safe or unsafe to fly, but SMS provides data, which is processed into information and then applied as knowledge and comprehension for safer operational practices. It is when this data has reached the level of comprehension that it can be integrated into a policy, the design of processes and the improvement of safety. In an SMS world it is still the flight crew, maintenance personnel, ground crew and the enterprise's management that make the difference between continuous safety improvement and continuous decline. However, SMS is an invaluable tool that some are overlooking while still expecting miracles from it. In addition, a contributing cause factor to the SFO incident is that airports are outdated by their 1903 design, without adapting to size, complexity and traffic volume.

AC759 Cleared To Land RWY 28R
Aviation Safety must always be viewed from the public's point of view, which is an expectation of a pleasant experience and no incidents between boarding the airplane and deplaning at destination. For some passengers it might be comforting to know that Air Canada 759 did not end in a disaster, while for others this might be a horrifying experience when learning about the incident, to the point where they will never travel on an airplane again. Either way, the public expects quality performance of aircraft and flight crew. The flight crew of AC 759 on short final realized that something was wrong, but expected someone else, in this case the Tower Controller, to make a decision for them of what to do. This principle of avoiding safety actions is outside the parameters of an effective SMS system.

When flying regularly, technical skills improve while technical knowledge levels are reduced to the degree of their application. The consequence is that the performance factor unknowingly declines until it reaches the time limit of comprehension and the level of unacceptable performance risk. A flight crew that is not able to comprehend the options available when there are lights on the runway they are cleared to land on is an enterprise-wide systemic failure of performance management and not individual errors. The flight crew made an inquiry to the Tower Controller as to why there were lights showing on a runway where they had been cleared to land, without first initiating safety actions and deviating away from the hazard.

CatalinaNJB

Thursday, July 13, 2017

SMS Information In A Suitable Medium

What is a suitable medium and who should it be suitable for? The unknown variable of this question is who decides whether the SMS is in a suitable medium or not. Since there is a regulatory requirement that SMS information be made available in a suitable medium, the regulation is written with ambiguity to allow for differences due to organizational size and complexity, but it is also written for the regulator to decide, based on their own opinion, what a suitable medium is. Airports are the smallest enterprises required to operate with an SMS; there could be just a couple of employees, where one is the Airport Manager and SMS Manager, and the other is the Accountable Executive. A suitable medium may be different for an airport than for a large airline with thousands of employees.

At one time, gold and cow hide may have been a suitable medium.
A suitable medium may mean something totally different from person to person, from organization to organization or from time to time. Some years ago an electronic manual would not have been a suitable medium to maintain SMS information, and today paper format manuals may be obsolete. For SMS information and documentation to be established in a suitable medium, it must be suitable to the user group. The medium does not need to be suitable to the regulator or to the Accountable Executive, but must be suitable to the users who are feeding data into the SMS system.

A medium, or system, that is not user-friendly for the submission of data is a safety system which in the long run will become an ineffective, reactive system. Such a system would not operate within the concept of SMS, which is a system of a just culture with proactive safety initiatives and accountability. When a system is developed for the purpose of supporting the bureaucracy of an airport or airline, it becomes ineffective as a supporting tool, while a system designed in support of the processes becomes an effective safety management tool. An SMS system is fail-free since it is another layer of safety supporting the operational safety processes.

A bureaucratic enterprise's SMS tools are best recognized by the attempt to adapt processes by enforcing the design upon operations, rather than changing the SMS design to adapt to operational safety processes. In other words, when a process is in non-compliance with the designed process, the failure may be with the design and not with the process itself.

An attempt to improve safety by enforcing design processes onto operational processes may cause unintended effects. If the operational performance of an aircraft is in non-compliance with the design performance, it does not improve safety to enforce the design performance. Most enterprises are not strictly bureaucratic or adhocracy systems, but are flexible enough to accommodate both a bureaucratic system with prescriptive policies and an adhocracy system with flexibility for safe operations within the size and complexity of the operations.

Establishing the SMS system in a suitable medium is more than checking the check-box because an individual has drawn a line in the sand and determined what medium is suitable as a one-size-fits-all medium. A suitable medium may include more than one medium, spanning paper format and electronic format in addition to smart-phone apps. When multiple mediums are accepted as suitable, a bureaucratic enterprise may have difficulty adapting and analyzing data from these different sources, while an adhocracy may have invented additional processes to capture all data for analysis. An effective enterprise adapts its processes for continuous safety improvements.
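
As a small illustration of capturing data from multiple suitable mediums into one analysis, the sketch below normalizes reports that arrive on paper (transcribed later) and through a smart-phone app into a single record format. The field names, functions and sample reports are hypothetical, not from any operator's actual system.

# Minimal sketch, assuming hypothetical field names: normalize safety
# reports from different "suitable mediums" into one record format so
# they can be analyzed together.
from dataclasses import dataclass
from datetime import date

@dataclass
class SafetyReport:
    reported_on: date
    source: str       # "paper" or "app"
    hazard: str

def from_paper(transcribed: dict) -> SafetyReport:
    """Convert a transcribed paper form into the common record."""
    return SafetyReport(date.fromisoformat(transcribed["date"]), "paper",
                        transcribed["hazard description"])

def from_app(payload: dict) -> SafetyReport:
    """Convert a smart-phone app payload into the common record."""
    return SafetyReport(date.fromisoformat(payload["ts"][:10]), "app",
                        payload["hazard"])

reports = [
    from_paper({"date": "2017-07-02", "hazard description": "loose gravel near threshold"}),
    from_app({"ts": "2017-07-03T14:22:00", "hazard": "fuel spill on apron"}),
]
for report in reports:
    print(report)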

CatalinaNJB

Wednesday, June 28, 2017

The Fork In The Road Test

All roads lead to Rome and there are many different ways of reaching the same goal or objective. Finding the root cause takes a road trip defining the turns and forks in the road. If travelling by air, the course may take a detour by relying on the old ADF, or be more efficient following a GPS course. There are several root cause analysis techniques and they all serve a purpose to improve safety; one root cause model may be as effective, or ineffective, as another. All root cause analysis models are designed to establish at what time or location in the failed process a different approach could have produced a different outcome. The 5-Why and fishbone root cause analyses are widely accepted within the aviation industry and assumed to establish the correct root cause. A risk assessment of substitute and residual risks is normally conducted after the root cause analysis to identify whether there are other or unexpected hazards introduced by the implementation of the proposed corrective action plan in the form of a new risk control strategy. As a compliance criterion, the enterprise monitors the CAP with a follow-up as an assessment of the effectiveness of the safety improvement.

Without knowledge one fork in the road is as good as another.
Monitoring the effect of corrective action is a standard procedure for follow-up of CAP implementation. Monitoring and follow-up may be dependent on seasonal differences or on the timeframe for collecting enough data to establish effectiveness. If an enterprise has lost control of its safety processes and decided to implement corrective strategies, The Fork In The Road Test is a tool to identify whether steps in the evaluation process are taking shortcuts and jumping to the conclusion that the new strategy is the correct strategy. This shortcut is an attempt to break a wall in a maze to make it to the end of the tunnel without following the process path.

The Fork In The Road Test is to backtrack the process to establish where in the maze the failure of the wall was, and to establish the time and location in the future where the missing link of a CAP is. This does not imply that an incident or accident can be predicted, but it implies that knowledge is vital to predict the hazards affecting the process.
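
A minimal sketch of this backtracking idea is shown below: a timeline of process steps is walked backwards to find the earliest step where an alternative was still available. The events and the alternative-available flags are hypothetical, loosely modelled on the ice-runway example in the next paragraphs.

# Sketch of backtracking a process timeline to locate the "fork in the
# road" - the earliest step where a different decision could still have
# produced a safe outcome. All events below are hypothetical.
from dataclasses import dataclass

@dataclass
class ProcessStep:
    time: str
    location: str
    description: str
    alternative_available: bool  # could a different turn have been taken here?

timeline = [
    ProcessStep("08:00", "operations office", "ice runway not inspected after thaw", True),
    ProcessStep("09:15", "departure airport", "aircraft departs for the ice strip", True),
    ProcessStep("10:40", "short final", "crew notices melt water on the threshold", False),
    ProcessStep("10:42", "ice runway", "aircraft strikes an ice ridge on landing", False),
]

def fork_in_the_road(steps):
    """Walk the timeline backwards; the last step visited that still had an
    alternative available is the earliest fork in the road."""
    fork = None
    for step in reversed(steps):
        if step.alternative_available:
            fork = step
    return fork

fork = fork_in_the_road(timeline)
print(f"Fork in the road: {fork.time} at {fork.location} - {fork.description}")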

When building and operating out of an ice runway there are many considerations affecting the design of the runway. Since the runway thaws out every year, it must be rebuilt the next season and located at exactly the same location to be validated as the same runway and to apply the same instrument approach procedures. In addition to the runway itself, ice movement over time offsets the precision approach to a point where it becomes unusable. When applying The Fork In The Road Test to the runway, the time begins when the ice melted and the location begins at the location where the ice threshold was located.

It becomes simple to see The Fork In The Road if an aircraft landed on the ice in the spring when most of the ice had melted. The time can be backtracked to establish that a change in direction, or taking a different turn at the fork in the road at that time, would have made a difference. The Fork In The Road is not the time and location when the flight crew had to make a decision to change the flight path, but the time and location at which the aircraft could still have been expected to complete the flight without an incident. When the ice has melted, the aircraft was doomed at the time of departure.

The Fork In The Road does not always take a straight path.

On the other hand, let's assume that the accident happened in the middle of the winter with a foot of ice and minus 45 degrees temperature. At this point it becomes a scientific task to establish where the fork in the road was. The task becomes to identify whether there were special variations that caused the change of course or an incident, or whether there were normal variations that were overlooked. For example, melting ice would be a normal variation, blowing snow would be a normal variation, darkness would be a normal variation and ice ridges would be a normal variation.

Let's for a minute assume that the airplane hit an ice ridge. This establishes the Fork In The Road at a time before the aircraft departed from civilization. The Fork In The Road is the one trigger that would, without doubt, have made a difference for safe operations. In this scenario, it would have made a difference if the normal variation had been identified and the runway inspected and assessed as safe for operations. The Fork In The Road Test is not applicable to events after departure, since the departure itself locked the aircraft into a path where at some point in time the flight crew would have to make a reversal decision or an incident would happen. The Fork In The Road Test had predicted a hazard of normal variations, but since the Fork In The Road Test was not applied, the hazard was not identified.

At the world's most dangerous airport a siren sounds about 45 minutes prior to arrival and before the aircraft departs for this destination. The Fork In The Road has been identified and the corrective action implemented at the time and location where it effectively makes a difference, and aviation has become safer. The Fork In The Road is in the planning and the decision to complete a pleasant flight.

CatalinaNJB

Saturday, June 17, 2017

SMS Communication

A small operator communicates differently with its personnel than a mega-enterprise would communicate. An airport with 2 or 3 people, being the Airport Manager, SMS Manager and Accountable Executive, may communicate verbally without much documentation, while a larger airport may use multiple levels of communication processes. Both operators must meet the same requirement of the expectation that communication processes (written, meetings, electronic, etc.) are commensurate with the size and complexity of the organization. When applying this expectation without ambiguity, or applying the expectation with fairness to the expectation itself, both operators are expected to apply exactly the same communication processes. The simplest avenue when assessing for regulatory compliance is to apply the more complex communication processes to both operators. With this approach, the smaller airport's SMS becomes a bureaucratic, unprofessional and ineffective tool for safe operations.

Small airport communication has also changed with the times.
At first glance it looks great that a small airport is expected to operate with different communication processes than a large airport, but when analyzing all available options there is very little tolerance, or none, to apply this expectation in any other way than as a prescriptive regulatory requirement. It is only when the operation is understood that the expectation can be applied with ambiguity, and with unfairness to the expectation itself, for an effective communication process for airport operators of any size and complexity.

When there is a finding at a small or large airport that the communication did not meet the regulatory requirement through this expectation, in that the information had been forgotten, misplaced or incorrectly interpreted, the operator is required to identify the policies, processes, procedures and practices involved that allowed this non-compliance to occur. This in itself, that an operator allowed a non-conformance to occur, is a statement of bias in the finding, implying that the operator had an option not to let the non-compliance occur. If this option had been available at the time, the operator would have taken different steps. The reason the non-compliance occurred is that the option to make a change was not available at the time when it occurred. Not all systems within the SMS were functioning properly, and often it is the systems of human factors, organizational factors, supervision factors or environmental factors. Reviewing a finding in 20/20 hindsight is a simple task, and pointing out what could have been done differently becomes the task of applying the most complex process. However, when the operation is in the moment, the options at that location and point in time are limited to snapshots of information, knowledge and comprehension of the events.

Human factors in communication is today integrated in automation and not visible.
Since there is an assumption in the root cause analysis requirements that the non-compliance was allowed, the question for the finding is no longer what happened, but why the operator allowed this event to escalate beyond regulatory requirements. The difference between a "what" and a "why" question is that "what" is data and "why" is someone's opinion. When the finding is issued during an audit by the regulatory oversight team, the answer to the "why" question becomes the opinion of only one person on that team. When the opinion of that person becomes the determining factor of a root cause analysis, it has become impossible to analyze the event for a factual root cause. It could also be that the answer to the "why" question becomes a compromise, or an average of the views of all inspectors, in which case the answer is no longer relevant to the finding, but to a process where all opinions are considered. On the other hand, when the "what" question becomes the determining factor, each link in what happened must be supported by data and documented events. If the events of the "what" cannot be answered first, before any other questions are answered, it has become an impossible task to assign a root cause to the finding. When the "what" question is answered, then a change to the "what" may be assigned and implemented.

This does not imply that the "why" question is not to be asked, but it becomes a factor of how the "why" is asked, and whether the determining factor for the "why" question is an agreement between several people in a group to assign an average of differences, or whether the "why" question is answered against the "what" question. When applying the 5-Why process, the answer, or root cause, is established by the answer to the first "why", since the rest of the answers must be locked in to the first. The more effective root cause analyses are the "fish-bone", the "5-Why matrix", or the "fork-in-the-road" test.

When the requirement of the second expectation, as stated in the root cause analysis document, is that the non-compliance was allowed, it changes the first expectation within an SMS element, of different communication based on the size and complexity of the airport, into a prescriptive regulatory requirement. The prescriptive requirement then becomes the common denominator for the event that was allowed to occur and must be applied to the most complex communication process. The simplest way to look at this is that when the "allowed to" is allowed to be applied to an event, it is assumed that human variations do not exist and that the system is operating in an undisputed, perfect virtual environment.

CatalinaNJB

Friday, June 2, 2017

No Data, No History, No Event

Root cause analysis is to find the single cause of why an unplanned event happened, or the link in the process where a different decision would have made a different outcome. This does not necessarily imply that a different outcome would have avoided the unplanned event; it may have happened at a different time or location and with a different outcome. The expectation of a different outcome is that the unplanned event would not happen.

When analyzing for the root cause, the 5-Why process is often applied. Unless an unbiased process is applied to the answers of the 5-Why process, the desired answer could be established prior to initiating the process and the answers backtracked from this desired answer. The fact is that most 5-Why processes only allow one option for the root cause. Since the organization is determined to establish a root cause, the root cause could be established without applying the 5-Why process at all. This is the "checkbox" syndrome of establishing a root cause by applying the approved root cause analysis. The assumption is that as long as the paperwork looks good it must be the right root cause and operations must be safe, correct? No, this is not correct. An incorrect root cause is more unsafe than a known but non-effective root cause, since the new and incorrect root cause has not been tested and the outcome is unknown. Assuming that the new root cause is effective is to assume that opinions are facts.

Find the roots that feed life into the process.
A root cause analysis must include data from prior documented events. If there is no data, no history and no documented event, a root cause analysis cannot be based on past experience. A one-time event is not a trend, and applying a root cause analysis to one event defeats the purpose for the safe operation of an airport or aircraft. If there is no data, there is no trend and there are no prior events to compare the analysis to. The key to success is to establish data and trends to determine the root cause and make changes to the processes to reduce or eliminate another unscheduled event or failure. For example, should a runway edge light fail and there is no data of prior failures, the short term fix is to replace the light bulb. This might not be what the regulator wants to see, but the fact is there is no data to justify a root cause, and, in addition, there is no data to justify that the burnt-out light is not an acceptable risk. Over time an airport may track the burnt-out lights (which is data) and over a period of 3-7 years establish a pattern of malfunctioning lights. With this information the airport may establish the root cause and change the lights at a reasonable time before the bulbs are expected to burn out. It's as simple as that.

Another option is to apply best practices, or continuous safety improvement, by collecting data from the light manufacturer on how many hours or cycles a runway edge light is expected to last. If this were done, a process to change these lights before they burn out could be reduced from 7 years to 6 months, and the safety goal of minimizing burnt-out lights achieved in a short time. By applying the data supplied by the manufacturer, a 5-Why analysis may not even be necessary to establish the root cause.
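
A rough sketch of this calculation, with assumed numbers: if the manufacturer rates a lamp for a certain number of hours and the lights burn roughly twelve hours a night, a preventive replacement interval can be worked out directly, without waiting years for failure data. The rated hours, daily burn time and safety margin below are made-up figures for illustration, not from any real manufacturer.

# Hedged sketch: schedule runway edge light replacement from the
# manufacturer's rated lamp life instead of waiting for burn-out data.
def replacement_interval_days(rated_hours, daily_burn_hours, safety_margin=0.8):
    """Days after which a lamp should be replaced so it is changed before
    reaching the fraction of rated life given by safety_margin."""
    usable_hours = rated_hours * safety_margin
    return usable_hours / daily_burn_hours

rated_hours = 2500      # manufacturer's rated lamp life (assumed)
daily_burn_hours = 12   # edge lights on roughly dusk to dawn (assumed)

days = replacement_interval_days(rated_hours, daily_burn_hours)
print(f"Replace edge lights roughly every {days:.0f} days "
      f"({days / 30:.1f} months) instead of waiting for failures.")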

When there is only one option of questions for a root cause, that question must be answered first.
Let's assume that an airport took the best-practices route and established a lifetime for runway edge lights. However, the lights still burned out before expected and became a frustration to airport management and an inconvenience for their customers. The next step is to collect data for a root cause analysis. In the process of deciding what approach to take to collect data, the 5-Why Matrix was applied. The result from this matrix was to mount wildlife cameras at the airport to see whether there was any wildlife connection to the burnt-out lights. Over time it was discovered that coyotes came and chewed the power cable and that the lights therefore burned out about 2 days later. This data could now be applied to a root cause analysis, or to the location of the fork in the road, and the process of transferring power could be improved. In other words, by burying the cable underground and covering it up to the light, the long term corrective action extended the intervals between replacing the lights.

Without data there is no event, only opinions of events. Applying a straight 5-Why does not necessarily establish the correct root cause, since the answer is locked in after the first question is answered. For the 5-Why process to be more effective, the application of a matrix moves the process out of the box for a non-biased result.
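
To illustrate the difference, the sketch below contrasts a straight 5-Why chain, where the first answer locks in the rest, with a matrix form that keeps several candidate answers open at each level. The example questions and answers are hypothetical, loosely drawn from the runway light example above.

# Straight chain: each answer forces the next question, so the root cause
# is effectively locked in by the first answer.
straight_5why = [
    "Why did the edge light burn out? -> the bulb reached end of life",
    "Why did it reach end of life? -> it was never replaced",
    "Why was it never replaced? -> no replacement schedule exists",
    "Why is there no schedule? -> lamp life data was never collected",
    "Why was the data never collected? -> no process assigns that task",
]

# Matrix form: each "why" keeps every plausible answer until data selects one.
matrix_5why = {
    "Why did the edge light burn out?": [
        "the bulb reached end of life",
        "the power cable was damaged",
    ],
    "Why was the power cable damaged?": [
        "wildlife chewed the exposed cable",
        "frost heave stressed the connector",
    ],
}

print("Straight 5-Why root cause:", straight_5why[-1].split("-> ")[1])
for question, answers in matrix_5why.items():
    print(question, answers)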


CatalinaNJB

Friday, May 19, 2017

Communicating or Transmitting SMS

There is an expectation that, for an SMS program to conform to regulatory compliance, the enterprise must have in place a process by which safety authorities, responsibilities and accountabilities are transmitted to all personnel. If this process is not in place, a non-compliance System Finding under Canadian Aviation Regulation (CARs) 107.02 may be issued to the operator. CARs 107.02 is a system compliance regulation, or a design regulation for a regulatory compliant Safety Management System. When the design of the SMS is regulatory compliant, then the processes executing the SMS design must also conform to regulatory compliance. In other words, these two compliance components are the design component and the operations component. For operators in Canada the design requirements are found in CARs 107.02, for both airlines and airports. The operations requirements are found in CARs 705.151 and CARs 302.500 respectively.

The manufacturing of this chain complies with the requirement to produce a chain.
When job descriptions are transmitted to personnel in accordance with this expectation, the message may or may not reach the intended personnel. Transmitting is one-way communication and it does not specifically direct the communication to the intended recipient. If the communication only reaches a recipient who is in a non-management position, the information may be overwhelming, since it does not conform to the expectations of that person's job description. Or, if the information transmitted reaches senior management only, their response may be incorrect for their job performance expectations. This expectation that "Safety Authorities, Responsibilities And Accountabilities Are Transmitted To All Personnel" may be compliant with the expectation itself and also compliant with the regulatory requirements under operations. However, by following the "letter of the sentence" only, other SMS required tasks are missed and not performed to acceptable levels. Since the interpretation of the information came into conflict with the position of the intended audience, there is a failure of the system.

The process in this example is functioning as expected, but the response to the communication was in conflict with the job position established in the organization chart. The effect caused by the lack of response was not just that the information was transmitted to incorrect positions and the job not done, but also that, by not performing as the expectation intended, other parts of the SMS were crumbling and the system itself did not function.

Destroying a process could crumble a system.
It doesn't matter how strong and well maintained 99 links in a chain are when there is one link that breaks. When the link is broken, there is a broken process somewhere that must be identified. Repairing the process by replacing the old chain with a new chain may not necessarily work well, since this does not consider the process. It could be that this link in the chain was being ground now and then by a grinding tool required for the process to function. Replacing the old with the new is an assumption that there is a manufacturing flaw, made without analyzing the operational processes. Then the next time it happens everybody is just as surprised as the 10 previous times. Often the next step is to change chain manufacturer, or to fire the person who authorized the supplier.

By not conforming to the intent of this expectation that "Safety Authorities, Responsibilities And Accountabilities Are Transmitted To All Personnel", the system itself may fail and everyone is as surprised as the first time it failed.


CatalinaNJB

Friday, May 5, 2017

Risk Management Differently

This is a blog with no relevance to any opinions, facts, research or science, but a trivial blog written for continuous improvement in safety by thinking beyond the horizon and outside the box. For continuous safety improvement to be effective, thinking outside the box is vital for the collection of unbiased data, which is then brought back into the box to be analyzed for safety improvements. We don't manage risks; we lead personnel, manage equipment and validate operational design for improved performance above the bar of the acceptable risk level.

Improvements begin outside the box.
Risk level analysis is traditionally established by applying likelihood, severity and exposure. In a risk level analysis the exposure must equal 1 for the hazard to become a risk to aviation safety. Without exposure there is no risk. Birds are a hazard to aviation safety. However, birds that are 100 miles away from the flight path are not a risk to aviation, but are still classified as a hazard. Traditionally these risk levels are colour coded, where green is acceptable, yellow is acceptable with mitigation and red is not acceptable. There is often little or no scientific data behind these risk levels except for aircraft performance. Human factors, organizational factors, supervision factors and environmental factors are not included in these risk assessments. Human factors may affect the risk level differently one day than another day. Human factors, or the interaction between software, hardware, environment, crew and other human interactions, are vital to aviation safety.
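
A minimal sketch of this traditional model follows, with assumed scales and band boundaries: likelihood and severity set the colour, while exposure acts as a gate so that a hazard with no exposure carries no operational risk. The 1-5 scales and score bands are illustrative assumptions, not a published standard.

# Sketch of the traditional likelihood/severity/exposure model described above.
def risk_level(likelihood, severity, exposure):
    """likelihood and severity on a 1-5 scale, exposure either 0 or 1."""
    if exposure == 0:
        return "no risk (hazard not in the flight path)"
    score = likelihood * severity          # 1 .. 25
    if score <= 6:
        return "green - acceptable"
    if score <= 14:
        return "yellow - acceptable with mitigation"
    return "red - not acceptable"

# Birds 100 miles from the flight path: a hazard, but no exposure.
print(risk_level(likelihood=3, severity=4, exposure=0))
# The same birds crossing the departure path: exposure = 1.
print(risk_level(likelihood=3, severity=4, exposure=1))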

There are two elements to human performance: 1) technical knowledge and 2) technical skills. Knowledge is the theory of operations, while skill is the operation itself. At the initial licensing of a pilot, the candidate first must pass a knowledge test and then a practical flight test. Without passing at an acceptable risk level, a pilot license cannot be issued. As the pilot is employed, this concept of refreshing both technical knowledge and technical skills becomes a concept of operational performance.

Normally a person's retention of learning decreases with time when the learning is not applied to operations. Much of the theoretical learning is not applied daily on the job, but occasionally with the use of checklists. The highest percentage loss occurs in the first days and weeks after the learning is completed and somewhat levels off after that. Since the learning is being applied in their skills performance by flying an aircraft daily, there is additional learning occurring on the job, and the performance level of technical skills improves in the days and weeks after the learning.

One enterprise expected its pilots to retain a 100% knowledge level one year after the training and would initiate the refresher course with the knowledge test, expecting all candidates to be as proficient in knowledge as they were 365 days earlier. Since pilots only applied part of their knowledge regularly in the day-to-day job and learning was not encouraged, most of what was learned had been forgotten in 365 days. Since their jobs were dependent on passing the knowledge test, the candidates would do their own personal refresher course in the last 2-3 weeks prior to the official refresher course. When the test was taken, all candidates passed and the enterprise could proudly check off the box that their pilots had retained 100% knowledge over 365 days.

When assessing risk levels differently, an enterprise would assess performance based on a pilot's retention of knowledge and skills. Let's assume the learning retention loss of knowledge is 20% per day for the first 84 days and from then on the retention loss is 2% per day up to 365 days. At the end of a year the total knowledge retention is 20%; in other words, if the pilot took the test without studying after 365 days, it would be expected that the test result would be 20% of the last result.

Pilots' technical skills retention is not reduced after learning; their performance gets better since they are applying their skills in their day-to-day job and are additionally exposed to known and unknown hazards regularly. At the end of 365 days the pilot's skill retention level is 180% of what it was after the previous flight test.

When applying this data as a combined retention level factor of knowledge and skills, the pilots are performing at their 100% level after 365 days. After 5 years in the same job they are performing above their 100% initial level.
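
The sketch below is one way to model this combined factor. The per-day figures above are read loosely; the decay and growth rates in the code are assumptions chosen only so that knowledge ends near 20%, skills near 180% and the combined performance factor near 100% after 365 days, matching the endpoints described above.

# Hedged sketch combining a knowledge-decay curve with a skill-growth curve
# into one performance factor. Rates are illustrative assumptions.
def knowledge_retention(day, fast_days=84):
    """Faster loss for the first fast_days, slower loss afterwards."""
    if day <= fast_days:
        return 100 * (0.99 ** day)                 # roughly 1% loss per day early on
    early = 100 * (0.99 ** fast_days)
    return early * (0.9975 ** (day - fast_days))   # slower loss afterwards

def skill_level(day):
    """Skills grow with daily line flying, approaching 180% of the level
    demonstrated on the last flight test."""
    return 100 + 80 * (1 - 0.99 ** day)

def performance_factor(day):
    """Combined factor: the middle of the knowledge and skill curves."""
    return (knowledge_retention(day) + skill_level(day)) / 2

for day in (1, 30, 84, 180, 365):
    print(f"day {day:3d}: knowledge {knowledge_retention(day):5.1f}%, "
          f"skills {skill_level(day):5.1f}%, "
          f"performance {performance_factor(day):5.1f}%")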

The performance factor's most critical days are days 60-80.
The traditional risk level model is based on aircraft performance, and pilots are expected to perform at their 100% performance level in both technical knowledge and skills. In addition, the traditional risk level matrix only applies recommendations of accepted risk or rejected risk. A different risk matrix is to apply an action to the colours, which are based on likelihood and severity. These actions are to communicate (green), monitor (yellow), pause (blue), suspend (orange) or cease (red) operations. Risk levels orange and red are applicable to aircraft performance, where pilot qualifications do not impact aircraft performance limitations. When overlaying the knowledge, skills and performance factor graphs onto the risk matrix, the lowest level of performance represents knowledge, the highest skills, and the middle is the performance level. A performance level should be above the monitoring (yellow) level for quality assurance of flight operations.
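
As a small sketch of this action-based matrix, the snippet below maps a likelihood and severity score to a colour and its action. The band boundaries are assumptions for illustration, not a published standard.

# Sketch of the action-based matrix: each colour maps to an operational
# action rather than a simple accept/reject.
ACTIONS = {
    "green":  "communicate",
    "yellow": "monitor",
    "blue":   "pause",
    "orange": "suspend",
    "red":    "cease operations",
}

def action_for(likelihood, severity):
    """likelihood and severity on a 1-5 scale; returns colour and action."""
    score = likelihood * severity
    if score <= 4:
        colour = "green"
    elif score <= 9:
        colour = "yellow"
    elif score <= 14:
        colour = "blue"
    elif score <= 19:
        colour = "orange"
    else:
        colour = "red"
    return colour, ACTIONS[colour]

print(action_for(likelihood=3, severity=3))   # ('yellow', 'monitor')
print(action_for(likelihood=5, severity=5))   # ('red', 'cease operations')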


CatalinaNJB

Thursday, April 20, 2017

The SMS Manager


The person managing the operation of the SMS fulfils the required job functions and responsibilities to meet regulatory requirements. An effective Safety Management System is led by a person who is technically qualified and understands the interaction of all systems. An airline and an airport are to ensure that the person who occupies the position of SMS Manager is qualified to lead and manage for regulatory compliance. The regulations for an SMS manager are written with ambiguity, or written to be open to more than one interpretation. This is how a performance based regulation is written, to allow for the application of size and complexity to conform to regulatory compliance in operation. Since there are no established standards, the enterprise must first establish its standards, or expectations, and then the qualification requirements for that position before the effective date of its SMS.
Ambiguity is in the design.
If the qualifications for an SMS manager are not established, or an unqualified person is placed in the position of SMS manager, there is no certainty within the enterprise that their SMS is a businesslike approach to safety with the SMS as an additional layer of safety.

One of the items an SMS manager is expected to lead and manage is to establish and maintain a reporting system to ensure the timely collection of information related to hazards, incidents and accidents that may adversely affect safety. An SMS manager with technical knowledge and intelligence of the specifics of operations may know and understand the effect of unsafe conditions, but also needs to be qualified to develop a reporting system that ensures timely collection of information. It is the task of the enterprise to ensure that the person in the SMS manager position has the skills required to develop and maintain such a system. Without these skills in an SMS manager, an enterprise may slowly drift away from the regulatory performance requirements.

Another skill required is to identify hazards and carry out risk management analyses of those hazards. Safety risk management of hazards is to apply the likelihood and severity of a hazard as it applies to the operations of an airport or airline. If there is no exposure to the hazard, there is no risk involved. In my many years of analyzing Safety Management Systems I have heard the opinion from regulators that a pilot is exposed to an engine failure on each takeoff, the reasoning being that an engine failure could happen and that the pre-takeoff briefing includes the actions in the event of an engine failure. It is true that an engine failure is a hazard, but it is not true that a pilot is exposed to an engine failure on each takeoff. The exposure determines the risk, and if not exposed to the hazard there is no risk. The preparation for an engine failure is a corrective action plan to action the risk if exposed.

As the root cause analyst, the SMS manager defines the time and location of the fork in the road.
Other skills required by an SMS manager are the skills to investigate, analyze and identify the root cause or probable cause of all hazards, incidents and accidents; to monitor and analyze trends in hazards, incidents and accidents; to monitor and evaluate the results of corrective actions with respect to hazards, incidents and accidents; to monitor the concerns of the civil aviation industry in respect of safety and their perceived effect on the holder of an airline or airport certificate; and to determine the adequacy of the training required to comply with regulatory requirements.
When the person managing the operation of the SMS fulfils the required job functions and responsibilities established by the policies, standards and job performance expectations, the enterprise has established a foundation for an effective SMS with job functions tailored to one specific enterprise and not intended for duplication.


CatalinaNJB

Sunday, April 9, 2017

The Value of Safety

The last blog touched on the value of safety and the ROI of safety. There are several safety articles written about the return on investment of a Safety Management System, with a return between 100% and 600%. All these ROIs are based on future predictions of a reduction in major accidents, operational incidents and hazards by applying the SMS tool. When applying an estimate of the lack of future losses, the ROI does not represent the true value of safety, but a virtual value of safety. Virtual cash, or virtual ROI, is not an actual return based on facts or data, but an opinion and projection of a planned SMS. The value of safety is not the lack of accidents or incidents, but the total revenue generated by operations. SMS is a businesslike approach to safety and the value of safety should be applied in that manner.

Process Applications Are Limited To Technical Capability.
An investment in an airline or airport is the total cash invested in the operations. The return on this investment is based on several factors which in the end produce a profit or loss. A safety management system is neutral in producing profit or loss, since it is a system that does not produce or consume events and occurrences. A functional SMS is the financial comptroller of safety and a quality assurance program. In business, a comptroller is a management level position responsible for supervising the quality of accounting and financial reporting of an organization. As a businesslike approach to safety, SMS is responsible for supervising the quality of safety. A financial commitment or investment affects all aspects of the organization. An investment in an aircraft or a new runway affects other areas such as maintenance, customer service and training. Depending on how this single investment is promoted, marketed and managed, it may increase the overall ROI of the organization, or it may incur a major loss. An aircraft or runway in itself is profit or loss neutral. It is the management of operations that generates a profit or loss. SMS is in the same manner accident or incident neutral, but affects outcomes based on how the SMS tool is applied. It is the application of SMS as a tool to manage and lead operations which generates the profits, losses, incidents or accidents.

The Return on Investment of SMS is not the savings from a reduction of accidents or incidents, but the return of cash revenue generated by in-control processes and organizationally based safety investment decisions. When purchasing an aircraft, the operator bases its judgement on what safety-nets the manufacturer has implemented. When building a new runway, the airport bases its judgement on the safety-nets applied by the construction company. When customers decide to purchase a ticket, or an airline decides to operate out of a specific airport, their decision to purchase is based on what safety-nets and assurance, or process controls of these safety-nets, the airline or airport have in place. Safety management, or leadership in process management, is the overarching tool in decision making and therefore the only profit generator in an organization.

Comfort on an airplane is important, but if there is an apparent lack of safety then other carriers are chosen. It is the same with an airport; if the runway is marginally short for operations, then the airlines choose other, less convenient airports. Safety is therefore the only profit generator, and when applied in a businesslike approach to safety the ROI is the cash returned in operations, not the absence of accidents.

ROI projections may apply the cost of accidents, but that is not the true ROI. The true ROI is the SMS decisions that went into the process of purchasing a new aircraft, or extending a runway, which contributed to the ROI and is the ROI of safety. As an ROI projection, the value of safety may be applied as $1.00 per second of time spent on a task as the investment, and the actual $1.00 per second spent on the task as revenue. Since both airplanes and runways are ROI neutral, it is the Safety Management System decisions that produced the ROI, or the profit or loss result. There is no single operation within aviation that does not assess for safety and the impact safety has on profit. Not as an impact of a reduction in incidents or accidents, but on the customer confidence level of operations.

SPCforExcel Out of Control Tests.
Without SMS there is zero confidence level of operational safety. Operators without an SMS may believe that they have a 100% confidence level of safety. However, when mathematically calculated, their confidence level of safety is 0%, since there is zero data to justify their statement. With an SMS in place, the operational confidence level of safety is at least 95%, even if the wishful thinking is for safety to be 100%. The other, unaccounted-for 5% of the confidence level covers occurrences so remote that the time between intervals of one occurrence is imaginary, theoretical, virtual, or fictional.
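
One way to illustrate why a data-free claim carries no confidence level, while even modest operational data supports a bounded statement, is the statistical "rule of three". The Python sketch below is a minimal example of that general point and is not the specific calculation behind the 95% figure above.

import math

def upper_bound_95(n_observations: int, n_occurrences: int = 0):
    """Approximate one-sided 95% upper bound on an occurrence rate.

    With n incident-free observations, the 'rule of three' gives an upper
    bound of roughly 3/n. With no observations at all, no confidence
    statement can be supported, so None is returned.
    """
    if n_observations == 0:
        return None  # zero data: a "100% safe" claim has nothing behind it
    if n_occurrences == 0:
        return 3.0 / n_observations
    p = n_occurrences / n_observations
    # Normal approximation for the non-zero case (illustrative only).
    return p + 1.645 * math.sqrt(p * (1 - p) / n_observations)

print(upper_bound_95(0))       # None -- no data, no defensible confidence level
print(upper_bound_95(10_000))  # 0.0003 -- a bounded, data-backed statement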

A Safety Management System is the Constitution of an organization and the tool for operations within a just culture and accountability, where the ROI is the fraction of out-of-control tests. Processes within the Safety Management System are analyzed in a Statistical Process Control (SPC) system with multiple tests for out-of-control processes. Each one of these tests is assigned a weight and applied to the ROI. Without data, the value of safety and ROI is just an assessment of opinions.
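
A minimal sketch of what such SPC testing and weighting could look like is shown below (Python). The two control-chart signals are a small subset of the standard out-of-control tests, and the weights are hypothetical placeholders rather than values taken from any SMS program.

from statistics import mean, stdev

def out_of_control_flags(samples):
    """Check two common control-chart signals: a point beyond three standard
    deviations, and a run of eight consecutive points on one side of the
    mean. Control limits are estimated from the sample data itself."""
    centre = mean(samples)
    sigma = stdev(samples)
    beyond_3_sigma = any(abs(x - centre) > 3 * sigma for x in samples)
    run_of_8 = any(
        all(x > centre for x in samples[i:i + 8])
        or all(x < centre for x in samples[i:i + 8])
        for i in range(len(samples) - 7)
    )
    return {"beyond_3_sigma": beyond_3_sigma, "run_of_8": run_of_8}

# Hypothetical weights per test; the weighted fraction of in-control results
# is what would then be carried into the ROI figure described in the text.
WEIGHTS = {"beyond_3_sigma": 0.6, "run_of_8": 0.4}

def weighted_in_control_score(samples):
    flags = out_of_control_flags(samples)
    return sum(weight for test, weight in WEIGHTS.items() if not flags[test])

print(weighted_in_control_score(
    [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.1, 9.7, 10.0, 10.4]))  # 1.0: in control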




CatalinaNJB

Thursday, March 23, 2017

A Qualified Person Runs The SMS

When an airport operator or an air operator appoints a qualified person as the Accountable Executive, the options are wide open to appoint anyone in the organization whom they see as qualified to be responsible for operations or activities authorized under the certificate and to be accountable on
The AE is a position without performance requirements.
behalf of the operator for meeting the requirements of the regulations. The requirement to qualify as the AE is that the person has demonstrated control of the financial and human resources that are necessary for the activities and operations authorized under the certificate. This is a broad description of qualifications, but it becomes limited by the organizational structure of authority.

The appointment of an airport AE compared to an air operator AE is slightly different, since an airport certificate is issued to a land-surveyed area, while an air operator certificate is issued to an individual or a corporate body. An AE for an airport is responsible to the land-surveyed area, while the AE for an air operator is responsible to the board of directors. However, as operators, both AEs are responsible on behalf of the certificate holder for meeting the requirements of the regulations, one of which is the Safety Management System regulations.

The Accountable Executive requirement could also be a matter of identifying a person who leads the necessary cultural change towards a Just Culture and a Quality Assurance Culture, and who shapes how services are provided to the general public with safety assurance. Without an SMS there is no safety assurance.

Run the SMS as a businesslike approach to safety.
Aviation Safety Management System is the NextGen of aviation safety, where a cultural change is inevitable for an SMS leader to be successful. Culture change does not happen overnight, but over a lengthy period of time. For a Just Culture to develop, each individual in an organization must be accepting of these changes. The Just Culture and Quality Assurance Cultures are developed within an organization by personnel consuming data, applying learning to the data to process it into information, engaging this information in operational processes with an output of knowledge, and by assessing this output to comprehend the systems involved in a change of culture. This change in culture is the Return on Investment (ROI).

SMS is a businesslike approach to safety, where ROI is vital to success. When a certificate holder applies this businesslike approach to safety and appoints a qualified person as the Accountable Executive, the requirement of demonstrating control over human and financial resources is incidental to the ROI. When applying this concept, the NextGen of Accountable Executive Leaders in Aviation SMS are born.



CatalinaNJB

Saturday, March 11, 2017

AE Demonstrating Control Of Financial And Human Resources

In an SMS world, the leader of the SMS program is the Accountable Executive, or the AE, who plays the role of a sponsor of a specific project. A sponsor is a person who provides the necessary funding, or a percentage of it, for a project or activity carried out by someone else. A sponsor does not have direct input on the activities in the project, but can affect management decisions by withdrawing the sponsorship or increasing their funding of both financial and human resources. Upon
Comprehension of the SMS is what defines human and financial resources.
the completion of the project there is no impact on the sponsor, other than the reputation of the project, or activity, that was sponsored. It works the same way in aviation, whether an airline or an airport: as long as the Accountable Executive can document their role as the sponsor of the project through the control of financial and human resources, the only repercussion to the operations is their pride and reputation from devastating audit findings.
It is assumed that the more funding and personnel that are assigned to a project, the more successful and safe the project will be. This mistaken assumption is supported by a requirement that a person, or position, is not designated for the role of the AE unless they have control of the financial and human resources that are necessary for the activities and operations authorized under the aviation certificate. With this requirement, an operator may be hesitant to restrict or reject funding or personnel to the operations when the safety card is applied.

There is no enforcement available when the AE restricts, rejects or withdraws funding from an SMS program. The regulatory requirement is not directly linked to the funding itself, but to the result, or output, of the Safety Management System. If an increase in funding from the sponsor, or AE, of an airline or airport were the only means to safety solutions and would by itself reduce the risk level, then it would be redundant, excessive, or even an injustice to the public, to operate with a Safety Management System.

Under an SMS there is a requirement that the AE has control of financial and human resources. Unless the operation is a sole proprietorship, or one person holds 51% of the shares in the corporation, there is no person who single-handedly has control over these financial and human resources. With a requirement that the AE
Safety and SMS in aviation is not a contrast in colors, but shades of variables.
must demonstrate control of these resources, this demonstration becomes a task of demonstrating it through the quality of the safety management system itself. Whether the AE, as the sponsor of the program, is on a path to success or in a downward losing spiral, the resources are available tools, but not the strategic solution to safety. More inputs on the controls of an airplane in a spiral do not eliminate the spiral, but assist the spiral in an increasingly downward trend. For an airplane to exit a spiral it takes leadership, management, system understanding and the application of resources to knowledge from data collected. Financial and human resources in aviation safety, or with an SMS, are not the solutions, but variables within the processes which are harnessed by their role within one system, for the total system to produce the desired, or planned, outcome.

An Accountable Executive who defines the quality of an SMS system by human and financial resources alone may not contribute to the safe operations of an airport or aircraft, while an Accountable Executive who is a leader within an SMS system, applying strategic safety solutions, is on a path of continuous safe operations.


CatalinaNJB

Friday, February 24, 2017

Defined Roles For The Accountable Executive

A system without defined roles has little or no chance to function as intended. Let's for a minute look at a Non-Destructive Testing system, or NDT. NDT is a system to detect "undetectable" flaws in a material, or to detect whether a production process produces flaws in the material. There are different independent systems within an NDT system, and none of these systems is compatible to interact with any of the other systems. The major NDT inspection systems are X-ray inspection, ultrasound inspection, magnetic particle inspection and fluorescent penetrant inspection.

A system of defined roles within a process.
The system of X-ray inspection is applied to inspect for flaws within a material to a relatively fine and defined resolution. Ultrasound is also applied to inspect for flaws within a material, but to a relatively coarse and undefined resolution. Magnetic particle inspection is applied to the discovery of both internal and external material flaws. The NDT inspection system applied for external inspection of flaws is the fluorescent penetrant inspection. Within an NDT system all these independent systems function to produce the outcome of an effective system that will function as it was designed to function. None of these NDT inspection methods is inferior to the others; they are each part of one total system to manage, or lead, processes to produce a flawless output.

Within an SMS system the functions of each system, or the defined roles, are hidden within human factors. In contrast to mechanical systems, human factors include unpredictable variations. These variations are harnessed by defining roles for each person within an SMS system.

The accountable executive has certain defined roles which the AE must perform for the total system to function, without expectations that another person will pick up where the AE did not fulfil the role. The role of an accountable executive is to fuel the SMS system with safety. An effective AE is involved strategically rather than operationally for the SMS system to have an anchor point within the organization.

Not defining the roles, or assuming a person should know better, is often how unsuccessful businesses operate. The definition of those systems is for a person to apply variations and pick up the slack as they see fit. Within an NDT world of systems, this would compare to placing an
Fill in the blank and play that role.
aircraft jet-engine disk in the dark room for fluorescent penetrant inspection and expecting an x-ray image to be produced by that process. The only way, and the only process by which, flaws in the jet-engine disk are discovered is to apply the fluorescent penetrant inspection as the process was intended.

An accountable executive has defined roles for one reason only, and that is for the SMS system to be effective, where processes are producing the outputs they were designed for. This does not imply that the processes should not, or do not, identify flaws within the activities, but rather confirms that the processes of an effective SMS system identify both normal variations and special cause variations. In an NDT world, a jet-engine disk could be acceptable without flaws, or the process, when applied correctly, could identify an unlikely flaw in the material.


CatalinaNJB

Saturday, February 11, 2017

Safety Burnout


About 4 years ago, the first post in this blog was written and published. Since then safety never got boring, or burnt-out, but became more exciting to work with as time went on. The first story was about the Safety Management System and how aviation has evolved from the "trial and error" method to being proactive and no longer accepting that accidents happen. There was never an acceptance of accidents, but the aviation industry did not make necessary changes until it was too late and a catastrophic event had happened. Over the last decade or so, SMS in aviation has been accepted as the New Generation of Aviation Safety. It is widely accepted that there is no profit in operating without documentation of established safety processes.

Safety has become a task to check the job completed box.
There is still an ongoing debate about what SMS actually is, with multiple and inconsistent answers given. SMS is simple in concept, but often buried in bureaucratic paperwork and presented to the aviation industry as a system that nobody can understand, except to be able to fill in the check-boxes and comply with opinions.

SMS is about job-performance and a confidence level of how safe the outcomes are of tasks completed. Some believe that it is possible to have a process for everything within an SMS system and a 100% confidence level that they are operating safely, but it isn't. An organization without an SMS implemented may live by this myth, since its justification is based on opinion and not data.

SMS is a businesslike approach to safety and an additional layer of safety on top of what the aviation industry already had in place. An effective SMS system parallels the operations. In the pre-SMS days, an operator would call up a friend and ask how they would do certain things and how it works. For other issues, they would call up other friends and get information on the best and safest way to operate. This was not a businesslike approach, but a fly-by-the-seat-of-your-pants approach. Corrective actions were not initiated until the airplane took an unexpected turn.

The fly-by-the-seat-of-your-pants system, which is a follow-me system, and a Safety Management System, which is a leadership system, are two different system approaches and incompatible systems. However, there are some who insist that these two systems can be merged by applying follow-me opinions as requirements and demanding compliance within the leadership and accountability system. This attempt to merge two incompatible systems, assigning failures to each opinion that is not met, and continuing to make attempts to merge after the first one failed, leads to safety burnout. There are two indicators of safety burnout, or of being on a slippery slope to safety burnout: the "check-box" syndrome and the "opinion" syndrome.

Expectations and process on collision course.
As a businesslike approach to safety, SMS becomes simple and enjoyable as safety goals are reached and continuous safety improvements are achieved. Without allowing an SMS to function, the task of managing safety is to look backwards and dwell on failures while at the same time attempting to move forward with corrections. It is more convenient for regulators to see SMS in a backwards view, rather than allowing time for SMS to move forward. Without the business knowledge of applying a businesslike approach to safety, but demanding immediate return on investment, SMS is viewed as being in non-compliance. When an SMS is found in opinion-based non-compliance, the most convenient move is to change the process for compliance with opinions. The regulators are admitting this themselves by referencing SMS expectations for non-compliance findings, since expectations are nothing else but opinions.

When these two opposing forces of a follow-me system and a leadership system collide, they create a dysfunctional operating environment that is identified by the SMS. As so often, the messenger is blamed and findings are given to the SMS system, when they should be given to the opinion that changed the performance of operations. When an SMS functions as it was intended, there will never be burnouts, since there will always be another, new safety challenge to take on and move forward with excitement. Safety burnout is the result of check-box and opinion-based SMS compliance.


CatalinaNJB


Saturday, January 28, 2017

How Does One Know If An Organization Is Applying The Non-Punitive Policy

That an organization has a non-punitive policy is an organizational and senior management statement
of accountability and a commitment of support for improved job-performance. In many
With the stroke of a pen the reactions to future events are set.
organizations, personnel job-performance has reached its limits with little, or no, room for improvements, until there are unexpected and major quality flaws or improper customer services discovered. Sometimes these events are named "mistakes", which initiates a process of learning-stagnation, and there is no further action to learn from the "mistake". Applying the non-punitive reporting policy is a policy to learn from mistakes and train for improvements.

A safety management system should include any requirements necessary for effective and continuous improvement of the SMS system. A non-punitive policy is a system-design requirement to be included among these additional requirements for the safety management system. This requirement is not a specific non-punitive policy design requirement, but it becomes an operational performance criterion for the SMS to function effectively. When assessing the performance of an SMS, there is a requirement to apply a policy for the internal reporting of hazards, incidents and accidents, including the conditions under which immunity from disciplinary action will be granted. Immunity is for a person to be protected, or exempted, from something, especially an obligation or penalty.

Data is formatted to comprehensive information.
This requirement to include a policy where there is immunity from disciplinary actions is applicable to the organization itself, or to the senior management team, and is not applicable to the employees of the organization. However, the non-punitive policy the organization develops becomes applicable to their employees, in a just-culture environment, as an occurrence reporting tool, and applicable to management to ensure this policy is applied as intended.

There are several tools available to analyze operational data, either to confirm that the policy is applied as intended or to discredit opinions of an effective policy. One of these tools is to analyze identified training and training records for the person who submitted a report. After the airport, or airline, received the report, they applied their non-punitive policy to the contributor's behavior. An organization may have a check-box marked on the paper-format report form, or in electronic format, stating that the non-punitive policy was applied, but these check-box tasks do not qualify as data to confirm, or discredit, that the policy is applied. Check-boxes are not direct data, but indirect data serving as triggers for further actions. The confirmation of applying the non-punitive policy is found in the training records.
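
A minimal sketch of how training records could be cross-referenced with submitted reports is shown below (Python). The record layouts, field names and the 90-day window are hypothetical assumptions for illustration only, not a prescribed format.

from dataclasses import dataclass
from datetime import date

@dataclass
class HazardReport:
    report_id: str
    submitted_by: str
    submitted_on: date

@dataclass
class TrainingRecord:
    employee: str
    completed_on: date
    course: str

def training_followed_report(report, training, window_days=90):
    """Return True if the reporter received training within the window after
    submitting the report -- the direct data that confirms the non-punitive
    policy was applied, as opposed to a checked box on the report form."""
    return any(
        record.employee == report.submitted_by
        and 0 <= (record.completed_on - report.submitted_on).days <= window_days
        for record in training
    )

report = HazardReport("R-001", "jsmith", date(2017, 1, 10))
records = [TrainingRecord("jsmith", date(2017, 2, 1), "Winter operations refresher")]
print(training_followed_report(report, records))  # True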


CatalinaNJB