Iron and Steel Industry Safety Management: Using Deep Learning to Predict Unsafe Practices


Abstract

The economic strength of a country is often judged by the development of its manufacturing industries, which help reduce unemployment and poverty by providing jobs to many people. This makes manufacturing crucial to a country's development and employment objectives. The rising competitiveness of India's manufacturing companies is reflected in the country's second position in the world in the 2010 Global Manufacturing Competitiveness Index (GMCI) prepared by the US Council on Competitiveness and Deloitte.

Due to increasing workplace complexity and socio-technical interactions, occupational accidents are on the rise.

To address this issue, safety management teams are collecting huge amounts of proactive and reactive data. In this study, we use text mining approaches to explore audit reports (proactive data) and identify factors hidden in the text, and then apply statistical techniques to establish relationships between those hidden factors. This helps predict the major factors behind unsafe practices, and the risk associated with them, from reported proactive data.

Through such an analysis, safety management can identify which unsafe states of the workplace lead to the largest number of incidents. Here, data from the Iron Making division is considered, covering the period from April 2015 to March 2018. Data were recorded on a daily basis for the categories of Unsafe Act and Unsafe Condition in the steel industry.

The original code of practice on safety and health in the iron and steel industry was adopted at a meeting of experts in 1981. This new code, which reflects the many changes in the industry, its workforce, the roles of the competent authorities, employers, workers and their organizations, and the development of new ILO instruments on occupational safety and health, focuses on the production of iron and steel and basic iron and steel products, such as rolled and coated steel, including from recycled material.

It does not deal with the mining of raw materials for iron and steel production, which is covered by the Safety and Health in Mines Convention, 1995 (No. 176), and by codes of practice on safety and health in coal mines (1986) and safety and health in opencast mines (1991), nor does it deal with the fabrication of commercial steel products.

Introduction

Iron is widely found in the earth's crust in the form of various minerals (oxides, hydrated ores, carbonates, sulphides, silicates and so on). Since prehistoric times, humans have learned to prepare and process these minerals through washing, crushing and screening operations, separating the gangue, calcining, sintering and pelletizing, in order to render the ores smeltable and to obtain iron and steel. The iron and steel industries are among the most important industries in India. From 2014 through 2016, India was the third largest producer of raw steel and the largest producer of sponge iron in the world.

The industry produced 82.68 million tons of total finished steel and 9.7 million tons of raw iron. Today, steel production is an index of national prosperity and the basis of mass production in many other industries such as shipbuilding, automobiles, construction, machinery, tools, and industrial and domestic equipment. The development of transport, in particular by sea, has made the international exchange of the required raw materials (iron ores, coal, fuel oil, scrap and additives) economically profitable. The world's pig iron production was 578 million tonnes in 1995 (see figure 1), and iron ore production increased non-linearly over the period 2003-2012 (see figure 2).

Iron manufacturing is an industry where safe working procedures are important, as workers face many risks due to the nature of the job. The work environment is often hot and noisy, work tasks are frequently heavy and physically demanding, and there is an ever-present risk of crushing injuries and burns. Anyone who has ever worked in the iron industry is aware of the high level of risk to which employees are exposed. Yet many iron manufacturing companies have demonstrated that an accident-free environment is a practical and achievable goal. This evolution is not only the result of a greater awareness of a moral obligation but also of legal requirements that have become more and more stringent. Another strong reason is the realization that safety excellence acts as a catalyst for better overall corporate performance.

To improve the performance of risk management in future projects, several studies have suggested that project practitioners should learn lessons from previous accidents and embed risk management considerations into the development process of a project. Learning from the past is a fundamental process in project risk management that helps individuals and organizations understand when, what and why incidents happened, and how to avoid repeating past mistakes. One method of avoiding an accident, or settling one, is to find the most similar litigation cases in the historical record, and finding these cases is one of the most challenging parts of accident analysis.

Fast retrieval of similar historical cases is becoming more and more important in dispute resolution for several reasons: first, the factual information in construction accidents is relatively easy to determine; second, accident occurrence is normally attributed to multiple violations of labour laws, ordinances or regulations, to poor jobsite management with shared responsibilities, or to the misplacement of instruments, machines, jobs and other materials, any of which can cause a fatal accident. In this project, automatic retrieval of similar accident cases from a manufacturing plant using the techniques of Natural Language Processing is studied.

In this study, deep learning is used to classify and predict the major factors of unsafe practices and their risk potential based on daily Visit Observation data from the Iron division of a steel plant. Deep neural networks are neural networks with many hidden layers, whose stacked transformations can represent far more complex functions than a single sigmoid or ReLU activation. Different types of deep learning models can be applied to text classification problems.

A Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) are used to classify the risk potential in this study. In convolutional neural networks, convolutions over the input layer are used to compute the output. This results in local connections, where each region of the input is connected to a neuron in the output. Each layer applies different filters and combines their results.

Literature Review

Safety climate is a psychological phenomenon and a sub-component of safety culture, usually reflected in the workforce's shared perceptions about the state of safety at any particular time. It can provide an indication of the priority of safety in an organization relative to other priorities such as production or quality (Adl et al., 2011) [1].

Recent approaches based on artificial neural networks (ANNs) have shown promising results for short-text classification. However, many short texts occur in sequences (e.g., sentences in a document or utterances in a dialog), and most existing ANN-based systems do not leverage the preceding short texts when classifying a subsequent one (Ji Young Lee & Franck Dernoncourt, 2016) [2]. In this work, models based on CNNs and LSTMs that incorporate the preceding short texts are developed.

When the number of entries increases, the computational complexity also increases (Stas, Juhar, & Hladek, 2014) [3]. As far as data mining is concerned, machine learning (ML) is often seen as an offshoot of statistics; it employs advanced models to make decisions based on its own cognizance (Du, 2017; Ranjan & Prasad, 2017) [4]. However, purely statistical and purely ML approaches are each considered less competent on their own, so a hybrid approach is usually preferred (Srivastava, 2015).

An Artificial Immune System (AIS) based self-adaptive attribute weighting method for Naive Bayes classification uses immunity theory from Artificial Immune Systems to search for optimal attribute weight values (Wu et al., 2015) [5]. Logistic regression is an efficient probability-based linear classifier. The problem of overfitting (where the model memorizes the dataset instead of learning generalizable patterns) can be addressed by using penalized logistic regression in an active learning algorithm (Wang & Park, 2017) [6].

Occupational health and safety (OHS) representatives and committees are the principal form of employee participation mandated by legislation in Anglo‐Saxon countries, and therefore have a strong base. However, their existence precedes legislation in some significant cases. The effectiveness of the committees as a form of participation depended on a complex complementarity of variables, including relationship with unions, the nature of management commitment, the organizational industrial relations climate and the political and institutional macro environment, consistent with ‘favourable conjunctures’ theory (Markey & Patmore, 2011) [7].

Problem Statement and Objective

The objective is to predict the major factors of unsafe practices and their risk potential from the proactive Visit Observation (VO) data of the Iron division of a steel plant, by exploring audit reports and identifying factors hidden in the text data. Classification models are used for the prediction of:

  • Unsafe practices description
      ◦ Clothing and PPE
      ◦ Orderliness
      ◦ Position of People
      ◦ Reaction of People
      ◦ Rules and Procedure
      ◦ Tools and Equipment
  • Risk potential
      ◦ Fatality
      ◦ Minor Injury
      ◦ Serious Injury

Methodology

Text Classification

Text classification is an example of a supervised machine learning task, since a labelled dataset containing texts and their labels is used to train a classifier. In this project, the text of accident records is used to predict the risk associated with them.

Deep learning is a branch of machine learning that aims to build a model between inputs and outputs, typically by means of deep neural networks (DNNs). Applications of deep learning can be found in image processing, speech recognition, drug and genomics discovery, time-series forecasting, weather prediction, and demand prediction. One major criticism of deep learning (in non-vision-based tasks) is that it lacks interpretability; that is, it is hard for a user to discern a relationship between model inputs and outputs. Careful hyper-parameter tuning is required to obtain optimal results, and the training process can take many hours.

Deep neural networks consist of many layers of linear and nonlinear functions that compute output values from inputs. They are a biologically inspired way of building computer programs that can learn and independently find connections in data. The networks are collections of software 'neurons' arranged in layers and connected in a way that allows them to communicate.

Each neuron receives a set of x-values (numbered from 1 to n) as input and computes a predicted y-hat value (see eq. 1). The vector x contains the values of the features of one of the m examples from the training set. Moreover, each unit has its own set of parameters, usually referred to as w (a column vector of weights) and b (a bias), which change during the learning process. In each iteration, the neuron calculates a weighted sum of the values of the vector x based on its current weight vector w and adds the bias. Finally, the result of this calculation is passed through a non-linear activation function g.

z = w_1 x_1 + w_2 x_2 + w_3 x_3 + \dots + w_n x_n = \mathbf{w}^T \mathbf{x} \quad (1)

Now let's zoom out a little and consider how calculations are performed for a whole layer of the neural network. We will use our knowledge of what is happening inside a single unit and vectorize across the full layer to combine those calculations into matrix equations. To unify the notation, the equations are written for a selected layer [l], where the subscript j marks the index of a neuron in that layer (see eq. 2 and eq. 3).

z_j^{[l]} = \mathbf{w}_j^T \mathbf{a}^{[l-1]} + b_j \quad (2)

a_j^{[l]} = g^{[l]}\left(z_j^{[l]}\right) \quad (3)

As can be seen, for each of the layers we have to perform a number of very similar operations. Using a for-loop for this purpose is not very efficient, so to speed up the calculation we use vectorization. First, by stacking the horizontal (transposed) weight vectors w together we build the matrix W. Similarly, we stack the bias of each neuron in the layer to create the vertical vector b. Nothing now stops us from building a single matrix equation that performs the calculations for all the neurons of the layer at once: Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, followed by A^{[l]} = g^{[l]}(Z^{[l]}).
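A minimal NumPy sketch of this vectorized forward pass (the layer sizes, random values and function names here are illustrative, not taken from the study):

```python
import numpy as np

def relu(z):
    # Element-wise ReLU, one possible choice for the activation g^[l]
    return np.maximum(0, z)

def layer_forward(A_prev, W, b, g):
    # Vectorized forward pass for one layer:
    # Z^[l] = W^[l] A^[l-1] + b^[l], then A^[l] = g^[l](Z^[l])
    Z = W @ A_prev + b
    return g(Z)

# Illustrative sizes: 4 input features, 3 neurons, batch of 2 examples
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))      # rows are the transposed weight vectors w_j^T
b = rng.standard_normal((3, 1))      # one bias per neuron
A_prev = rng.standard_normal((4, 2)) # activations from the previous layer

A = layer_forward(A_prev, W, b, relu)
print(A.shape)  # (3, 2): 3 activations per neuron, for each of the 2 examples
```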

Feature Engineering

In this step, raw text data is transformed into feature vectors, and new features are created from the existing dataset. We implement the following ideas to obtain relevant features from our dataset:

  • Count Vectors as features
  • TF-IDF Vectors as features
      ◦ Word level
      ◦ N-gram level
      ◦ Character level
  • Word Embeddings as features
  • Text / NLP based features
  • Topic Models as features

Count Vectors as features

Count Vector is a matrix notation of the dataset in which every row represents a document from the corpus, every column represents a term from the corpus, and every cell represents the frequency count of a particular term in a particular document.
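A minimal sketch with scikit-learn's CountVectorizer (the example observations are invented for illustration):

```python
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical audit-style observations, for illustration only
docs = [
    "worker not wearing helmet near furnace",
    "oil spill near furnace not cleaned",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)  # sparse document-term matrix

print(vectorizer.get_feature_names_out())  # columns: terms in the corpus
print(counts.toarray())                    # rows: per-document term counts
```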

TF-IDF Vectors as features

The TF-IDF score represents the relative importance of a term in a document and in the entire corpus. The score is composed of two terms: the first computes the normalized Term Frequency (TF); the second is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents in which the specific term appears.

TF (t) = (Number of times term t appears in a document) / (Total number of terms in the document)

IDF(t) = log_e(Total number of documents / Number of documents containing term t)
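As a worked example (with invented numbers): if the term "guard" appears 3 times in a 100-word observation, TF(guard) = 3/100 = 0.03; if the corpus contains 10,000 observations and "guard" appears in 100 of them, IDF(guard) = log_e(10,000/100) ≈ 4.6, giving a TF-IDF score of about 0.03 × 4.6 ≈ 0.14.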

TF-IDF vectors can be generated at different levels of input tokens (words, characters, n-grams):

Word-level TF-IDF: a matrix representing the tf-idf score of every term in each document.

N-gram-level TF-IDF: N-grams are combinations of N terms; this matrix represents the tf-idf scores of N-grams.

Character-level TF-IDF: a matrix representing the tf-idf scores of character-level n-grams in the corpus.
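A minimal sketch of all three levels with scikit-learn's TfidfVectorizer (the documents and n-gram ranges are illustrative choices):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical observations, for illustration only
docs = [
    "worker not wearing safety goggles",
    "loose cable across walkway not flagged",
]

# Word level: one column per term
word_tfidf = TfidfVectorizer(analyzer="word").fit_transform(docs)

# N-gram level: columns are 2- and 3-word combinations
ngram_tfidf = TfidfVectorizer(analyzer="word", ngram_range=(2, 3)).fit_transform(docs)

# Character level: columns are character n-grams
char_tfidf = TfidfVectorizer(analyzer="char", ngram_range=(2, 3)).fit_transform(docs)

print(word_tfidf.shape, ngram_tfidf.shape, char_tfidf.shape)
```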

Word Embeddings

A word embedding is a way of representing words and documents as dense vectors. The position of a word within the vector space is learned from text and is based on the words that surround it when it is used.

There are four essential steps, sketched in the code example after this list:

  1. Load the pretrained word embeddings
  2. Create a tokenizer object
  3. Transform text documents to sequences of tokens and pad them
  4. Create a mapping of tokens to their respective embeddings
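A minimal sketch of these four steps with Keras utilities (the GloVe file path, the 100-dimensional embeddings and the maximum length of 70 are illustrative assumptions):

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

docs = ["worker not wearing helmet", "oil spill near walkway"]  # illustrative

# Step 1: load pretrained word embeddings (e.g., GloVe); the file path is hypothetical
embeddings_index = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        values = line.split()
        embeddings_index[values[0]] = np.asarray(values[1:], dtype="float32")

# Step 2: create a tokenizer object and fit it on the corpus
tokenizer = Tokenizer()
tokenizer.fit_on_texts(docs)

# Step 3: transform documents to token sequences and pad them to equal length
sequences = pad_sequences(tokenizer.texts_to_sequences(docs), maxlen=70)

# Step 4: map each token index to its pretrained embedding vector
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, 100))
for word, i in tokenizer.word_index.items():
    vector = embeddings_index.get(word)
    if vector is not None:
        embedding_matrix[i] = vector
```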

Text / NLP based features

A number of additional text-based features can also be created, which sometimes help improve text classification models; a combined sketch follows the part-of-speech list below. Some examples are:

  • Word Count of the documents – total number of words in the documents
  • Character Count of the documents – total number of characters in the documents
  • Average Word Density of the documents – average length of the words used in the documents
  • Punctuation Count of the documents – total number of punctuation marks in the documents
  • Upper Case Count of the documents – total number of upper-case words in the documents
  • Title Word Count of the documents – total number of proper case (title) words in the documents

Frequency distribution of Part of Speech Tags:

  • Noun Count
  • Verb Count
  • Adjective Count
  • Adverb Count
  • Pronoun Count
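A minimal pandas sketch of the count-based features above (the observation texts are invented; part-of-speech counts could be layered on top with a tagger such as nltk.pos_tag, which requires its tagger data to be downloaded first):

```python
import string
import pandas as pd

df = pd.DataFrame({"text": [
    "Worker not wearing helmet near the furnace.",
    "Loose cable across walkway; area not barricaded.",
]})  # illustrative observations

df["word_count"] = df["text"].str.split().str.len()
df["char_count"] = df["text"].str.len()
df["word_density"] = df["char_count"] / (df["word_count"] + 1)  # average word length
df["punct_count"] = df["text"].apply(lambda t: sum(c in string.punctuation for c in t))
df["upper_count"] = df["text"].apply(lambda t: sum(w.isupper() for w in t.split()))
df["title_count"] = df["text"].apply(lambda t: sum(w.istitle() for w in t.split()))

print(df.drop(columns="text"))
```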

Topic Models as features

Topic modelling is a technique for identifying the groups of words (called topics) in a collection of documents that best capture the information in the collection. Latent Dirichlet Allocation (LDA) is used here for generating topic-modelling features. LDA is an iterative model that starts from a fixed number of topics. Each topic is represented as a distribution over words, and each document is then represented as a distribution over topics. Although the tokens themselves are meaningless, the probability distributions over words provided by the topics give a sense of the different ideas contained in the documents.
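A minimal sketch using scikit-learn's LatentDirichletAllocation (the documents and the choice of 2 topics are illustrative):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "worker not wearing helmet near furnace",
    "oil spill on walkway not cleaned",
    "crane operated without signal man",
]  # illustrative observations

counts = CountVectorizer(stop_words="english").fit_transform(docs)

# Fit LDA with a fixed number of topics; the per-document topic
# proportions then serve as features for the classifier
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_features = lda.fit_transform(counts)  # shape: (n_documents, n_topics)
print(topic_features)
```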

Convolutional Neural Network

In convolutional neural networks, convolutions over the input layer are used to compute the output. This results in local connections, where each region of the input is connected to a neuron in the output. Each layer applies different filters and combines their results (see figure 6).

Defining CNN

Text as a sequence is passed to the CNN. The embedding matrix is passed to the embedding_layer. Five different filter sizes are applied to each comment, and a GlobalMaxPooling1D layer is applied to each convolutional branch. All the outputs are then concatenated and passed through a Dropout layer, a Dense layer, another Dropout layer, and a final Dense layer, as sketched below.
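A minimal Keras sketch of this architecture (the vocabulary size, filter counts, kernel sizes, dropout rates and layer widths are illustrative assumptions; in practice the Embedding layer would be initialized with the embedding_matrix built earlier):

```python
from tensorflow.keras.layers import (Input, Embedding, Conv1D, GlobalMaxPooling1D,
                                     Concatenate, Dropout, Dense)
from tensorflow.keras.models import Model

# Illustrative sizes; in practice these come from the tokenizer and labels
maxlen, embed_dim, vocab_size, num_classes = 70, 100, 20000, 3

inputs = Input(shape=(maxlen,))
x = Embedding(vocab_size, embed_dim, trainable=False)(inputs)  # weights=[embedding_matrix] in practice

# Five different filter sizes, each branch followed by global max pooling
branches = []
for size in (1, 2, 3, 4, 5):
    conv = Conv1D(filters=36, kernel_size=size, activation="relu")(x)
    branches.append(GlobalMaxPooling1D()(conv))

x = Concatenate()(branches)        # concatenate all pooled outputs
x = Dropout(0.5)(x)
x = Dense(128, activation="relu")(x)
x = Dropout(0.5)(x)
outputs = Dense(num_classes, activation="softmax")(x)  # final Dense layer

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```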

Training CNN

The number of epochs is the number of complete passes the model makes over the training data, and the batch size is the amount of data the model sees at a single time. As we are training on a small dataset, the model will overfit in just a few epochs.
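Continuing the sketch above, a hedged example of the training call (the epoch count, batch size and early-stopping settings are illustrative; X_train and y_train stand for the padded sequences and one-hot labels produced in the feature-engineering steps):

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop when validation loss stops improving, to limit overfitting on a small dataset
early_stop = EarlyStopping(monitor="val_loss", patience=2, restore_best_weights=True)

history = model.fit(
    X_train, y_train,        # padded sequences and one-hot labels from earlier steps
    validation_split=0.1,    # hold out part of the training data for validation
    epochs=5,                # few epochs: a small dataset overfits quickly
    batch_size=32,
    callbacks=[early_stop],
)
```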

Recurrent Neural Network – LSTM

Unlike feed-forward neural networks, in which activation outputs are propagated only in one direction, recurrent neural networks propagate activations in both directions (from inputs to outputs and from outputs back towards the inputs). This creates loops in the network architecture that act as a 'memory state' for the neurons, giving them the ability to remember what has been learned so far.
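A minimal Keras sketch of an LSTM classifier for the same task (the layer sizes are illustrative; as with the CNN, the Embedding layer would in practice carry the pretrained weights):

```python
from tensorflow.keras.layers import Input, Embedding, LSTM, Dropout, Dense
from tensorflow.keras.models import Model

# Illustrative sizes, matching the CNN sketch above
maxlen, embed_dim, vocab_size, num_classes = 70, 100, 20000, 3

inputs = Input(shape=(maxlen,))
x = Embedding(vocab_size, embed_dim, trainable=False)(inputs)
x = LSTM(100)(x)    # recurrent layer: the hidden state carries memory across tokens
x = Dropout(0.5)(x)
outputs = Dense(num_classes, activation="softmax")(x)

lstm_model = Model(inputs, outputs)
lstm_model.compile(optimizer="adam", loss="categorical_crossentropy",
                   metrics=["accuracy"])
lstm_model.summary()
```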
