Recently I attended a meeting where management was strongly supporting the development of a maintenance program. Everyone from the CEO down through most levels and areas of the company recognized the impact of a poor program, thanks to some recent events. A relatively simple CMMS was in place, but some equipment was missing from the database and the maintenance steps within the CMMS were essentially non-existent, lacking any detail. It was well known that the maintenance staff was busy, yet the CMMS remained unused and most work appeared to be reactive firefighting. Maintenance staff were also being used for non-maintenance tasks and errands for operations.
With several significant issues impacting production, senior staff were trying to get a handle on the situation. It was determined that planned maintenance and the CMMS should be put in place as a tactical step toward meeting specific strategic corporate goals. The result was an issue that I have seen time and time again. In my position, which is now mostly back on the executive and operations side of the business, I observed the meeting with the managers' point of view in mind.
Senior management, operations management, maintenance management, and maintenance supervision were in attendance. The senior manager pointed out that there had been requests for more resources in maintenance, but it had also been noticed that there were no measures identifying the work being performed. It was known that a rudimentary program had been started a number of years earlier and then discontinued. There was no doubt that work was being done, but there was no way of determining whether time, energy, and resources were being used effectively. Older equipment was failing, and records were spotty on how many failures had occurred and what general condition the equipment was in. It was admitted that several earlier attempts had gone unsupported, and that this time the program was going to be restarted and supported.
The maintenance managers pointed to the lack of support in the past and noted that the system was not fully updated, implying that more work needed to be done before starting. Senior management assured them that things had progressed (or regressed) and that there were additional pressures for implementing the program that were tied to the corporate strategy.
Maintenance supervisors were upset that there would be paperwork that would slow them down. They felt that what they really needed was more people, and they started listing their tasks, which included numerous tasks not associated with maintenance. Management's position was that they could not expand a broken program without measures and a clear view of how time and resources were being utilized.
Operations was fairly neutral, appeared to support the program, and made positive suggestions. From this discussion, a basic outline for kick-starting the program was developed, with the understanding that gradual improvements would be made. Overall, the maintenance organization held the attitude of "this didn't work before, so why should it work now," and management appeared frustrated that their efforts to support the maintenance program were falling on a broken organization.
In all, I was able to see both sides. I have seen this situation numerous times in organizations large and small. The end result depends more on where the strength of personalities lies than on even a clear-cut, common-sense program.
In the reliability and maintenance community we often ask, as I have heard time and time again in conferences, private conversations, and articles: "Why won't management listen to us?" or "They don't get it!"
What happens when:
1) Corporate management ‘gets it;’
2) Operations is neutral but supportive;
3) Walls, resistance, and demands are put in place by the maintenance organization even with management supporting the program?
How would you approach this situation?
Monday, January 31, 2011
Friday, January 28, 2011
On Technical Training
Training is an area that is often wanted, but it also takes a back seat to other areas when things get a little tough. I have noted over the years that some managers even become concerned with the training process and have stated, "Why train them when they might leave to get a job somewhere else?" Whenever I have heard such statements, I have thought that these managers lack vision and are relatively short-sighted. In other cases, I hear that training is not a necessary function and 'gets in the way.' Again, a short-sighted view showing reactive management rather than strategic or tactical thinking.
The purpose of a proper training program is to meet the needs of the organization, have an impact on the bottom line, improve morale, and meet regulatory and customer requirements. You can evaluate a successful program by noting the operational or functional successes of the organization and its personnel. If the training program is ineffective, you will not see those changes in the organization, which signals that the program requires review and improvement.
To understand the program, you must identify the basic makeup of the training organization. The highest level consists of five elements which make up the training strategy:
1. Management is the function of directing and controlling needs assessments;
2. Support is the function of maintaining all parts of the system;
3. Administration is the function of day-to-day processing and record keeping;
4. Delivery is the function of providing instruction to students; and,
5. Evaluation is the function of gathering feedback data through formative, summative, and operational evaluation.
The key to any successful training program is the identification of the gaps in knowledge within the organization through needs assessment. An effective program identifies those needs and then develops the training around them. The basic organization for the tactical application of the training program also consists of five components:
1. Analysis is the process used to identify critical tasks and to identify the standards, conditions, performance measures, and other criteria needed to perform each task. Training is based upon the tasks identified in the analysis, and the results form the basis for that training. The analysis includes evaluating the gap through an understanding of the job/task descriptions, the existing knowledge level, and where personnel currently stand (a simple sketch of this comparison follows this list).
2. The instructional design is based on the analysis phase and is the point where the designers develop the learning objectives, test strategy, and test items, as well as the high-level design of the training. The instructional designer determines the strategies to be used and selects the instructional method and media. Existing materials and raw media may be reviewed to determine their applicability to the specific instruction under development. At this point, the implementation plan is also developed.
3. The instructional development is based on the design phase and is where the lesson materials, exercises, drills, and other instructional materials for both the student and instructor are developed. The media selected in the design phase is produced and all materials are developed.
4. After the instructional system has been designed and developed, and the validation and evaluation have been completed, the instructional program may be fielded in the implementation phase.
5. Evaluation is a continuous process that starts during the needs analysis and continues throughout the development and life cycle of the instructional system. Feedback from the evaluation process is used to modify the training program as necessary. To ensure continuing quality of the fielded training, operational evaluations consisting of both internal (classroom) and external (field/operations) evaluations provide the necessary feedback.
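To make the analysis step a little more concrete, here is a minimal sketch, in Python, of the kind of gap comparison a needs assessment produces: the skills each critical task requires, set against the skills personnel currently hold. The task names, skill labels, and data structures are purely illustrative assumptions on my part and are not tied to any particular training system or CMMS.

from dataclasses import dataclass, field

@dataclass
class Task:
    # A critical task identified during analysis, with the skills it requires.
    name: str
    required_skills: set

@dataclass
class Technician:
    # A member of staff and the skills they currently hold.
    name: str
    skills: set = field(default_factory=set)

def gap_analysis(tasks, staff):
    # For each person, return the required skills they do not yet hold.
    required = set()
    for task in tasks:
        required |= task.required_skills
    return {person.name: required - person.skills for person in staff}

if __name__ == "__main__":
    tasks = [
        Task("Pump overhaul", {"bearing replacement", "shaft alignment"}),
        Task("Motor testing", {"insulation resistance testing"}),
    ]
    staff = [
        Technician("Technician A", {"bearing replacement"}),
        Technician("Technician B", {"insulation resistance testing", "shaft alignment"}),
    ]
    for name, gaps in gap_analysis(tasks, staff).items():
        print(name, "needs training in:", sorted(gaps) if gaps else "nothing")

In practice, the inputs would come from the job/task descriptions and existing training records rather than being typed in by hand, but the output is the same: a list, per person, of the knowledge gaps the training must close.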
There is, of course, a great deal more involved in a training program, such as records requirements, regulatory requirements, and so on. For instance, maintaining safety training records becomes vital in order to meet OSHA requirements. In all cases, developing the program around a real understanding of the organization's needs is vital to the success of the organization.