
Abstract

Crude oil and natural gas are usually transported in metallic pipelines. These pipelines, in some cases extending for hundreds of kilometers, are inevitably exposed to harsh environments: extreme temperatures, internal pressure, corrosive chemicals, and so on. At some point in their lifetime, metallic pipelines are therefore highly likely to develop serious metal-loss defects such as corrosion which, if left undetected and improperly managed, can have catastrophic consequences, damaging the environment and costing human lives, not to mention the millions of dollars in maintenance costs borne by the owning companies. To avoid such impacts, the oil and gas industry has recommended that pipeline monitoring and maintenance systems follow a standard safety procedure. The industry standard identifies three types of metal-loss defects, namely severe, moderate, and superficial, based on the estimated dimensions of the defect. According to the standard procedure, a defect's depth plays a major role in determining its severity level.
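The depth-driven severity classification above can be sketched as a simple decision rule. Note that the cut-off values below are illustrative placeholders only; the actual thresholds defined by the industry standard are not given in this abstract.

```python
def classify_defect(depth_pct):
    """Map an estimated defect depth (e.g., as a percentage of wall
    thickness) to the three severity levels named in the abstract.
    The thresholds are hypothetical, not taken from the standard."""
    if depth_pct >= 50:        # hypothetical "severe" threshold
        return "severe"
    elif depth_pct >= 20:      # hypothetical "moderate" threshold
        return "moderate"
    return "superficial"

print(classify_defect(65))  # severe
```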

To detect a metal-loss defect and estimate its depth, autonomous devices equipped with strong magnets and arrays of magnetic sensors are used on a regular basis to scan the walls of the targeted pipelines, exploiting a well-established technology known as magnetic flux leakage (MFL). The principle behind MFL is that when a pipeline wall is magnetized by two magnets of opposite polarity, a magnetic field is established in which lines of magnetic force flow through the wall (from the south pole to the north pole). In the presence of a defect, such as a crack, two new poles appear at the edges of the crack, and the air gap between them causes the magnetic lines of force to bulge out of the wall. Defect depths can be accurately estimated from the amplitudes of the observed MFL signals. However, given the huge amount of MFL data collected, manual and visual inspection has proven to be time-consuming, tedious, inefficient, and error-prone. Moreover, the cause-and-effect relationship between pipe defects and the shapes of MFL signals is not well understood, so traditional mathematical models are not available. Machine learning techniques are therefore well suited to managing big data for ill-posed problems such as pipeline defect characterization. Machine learning is a generic term for the “artificial” creation of knowledge from experience: an artificial system learns from examples and, after the learning phase is complete, is able to generalize; that is, the system does not merely memorize the examples but “detects” regularities in the learning data and can thus also evaluate unseen data. Machine learning techniques are applied in a wide range of fields, such as automated medical diagnosis, credit card fraud detection, stock market analysis, classification of nucleotide sequences, voice and text recognition, and autonomous systems.

In this work, we propose a machine learning-based approach for defect depth estimation in oil and gas pipelines. To reduce data dimensionality, representative and discriminant features were first extracted from the MFL signals; this, in turn, sped up the learning process and improved the estimation accuracy of the new approach. Statistical methods, as well as polynomial series, were used to extract such meaningful features from 1353 data samples, yielding 33 features in total. The data were organized as follows: 70% for training, 15% for testing, and 15% for validation. The features were fed into a Generalized Regression Neural Network (GRNN), a Radial Basis Neural Network (RBNN), and a decision tree. Both neural network-based techniques (though not the decision tree) achieved superior defect depth estimation accuracy compared to results obtained by service providers such as GE and ROSEN. For the GRNN, the estimation accuracies are 87%, 81%, and 83% for the training, testing, and validation data, respectively (see Fig. 1(a)). For the RBNN, the estimation accuracies are 89%, 84%, and 85% for the training, testing, and validation data, respectively (see Fig. 1(b)). The estimation accuracy obtained by GE is 80% within an error tolerance of ±10, and that obtained by ROSEN is 80% within an error tolerance of ±15. The decision tree yielded the worst performance, with an estimation accuracy of 75% within an error tolerance of ±10.
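The pipeline described above (feature extraction, 70/15/15 split, GRNN regression) can be sketched in a few lines of NumPy. The abstract does not specify the exact 33-feature recipe, so the statistical and polynomial features below are illustrative; the MFL signals and depth labels are synthetic stand-ins, and the GRNN is implemented in its standard form as a Nadaraya-Watson kernel regressor with a Gaussian kernel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for MFL signals (the real data set had 1353 samples).
n_samples, sig_len = 300, 128
signals = rng.normal(size=(n_samples, sig_len))
depths = rng.uniform(0, 100, size=n_samples)   # hypothetical depth targets

def extract_features(sig, poly_deg=4):
    """Statistical moments plus polynomial-fit coefficients, in the spirit
    of the paper's feature extraction (exact recipe is not specified)."""
    stats = [sig.mean(), sig.std(), sig.min(), sig.max(),
             np.median(sig), np.ptp(sig)]
    x = np.linspace(-1, 1, sig.size)
    poly = np.polyfit(x, sig, poly_deg)          # poly_deg + 1 coefficients
    return np.concatenate([stats, poly])

X = np.array([extract_features(s) for s in signals])

# 70% training / 15% testing / 15% validation split, as in the abstract.
idx = rng.permutation(n_samples)
n_tr, n_te = int(0.70 * n_samples), int(0.15 * n_samples)
tr, te, va = idx[:n_tr], idx[n_tr:n_tr + n_te], idx[n_tr + n_te:]

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """GRNN prediction: a weighted average of training targets, with
    Gaussian weights on the feature-space distances (spread `sigma`)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

y_pred = grnn_predict(X[tr], depths[tr], X[te], sigma=5.0)
print(y_pred.shape)  # one depth estimate per test sample
```

Because a GRNN output is a convex combination of the training targets, its predictions always stay within the range of depths seen during training; the kernel spread `sigma` is the only tunable parameter.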

/content/papers/10.5339/qfarc.2016.EEPP2827
2016-03-21
2020-09-20