Failure Statistics Analysis Based on Bayesian Theory: A Study of FPSO Internal Turret Leakage

The loads and corrosion caused by the harsh marine environment lead to severe degradation of offshore equipment and compromise its safety and reliability. In quantitative risk analysis, failure models are difficult to establish through traditional statistical methods. Hence, the calculation of the occurrence probability of small-sample events is often met with great uncertainty. In this study, the Bayesian statistical method is applied to analyze oil and gas leakages of the FPSO internal turret, a typical small-sample risk event that can nevertheless lead to severe losses. According to the corresponding failure mechanisms, two Bayesian statistical models using the Weibull distribution and the logarithmic normal distribution as the population distribution are established, and the posterior distributions of the corresponding parameters are calculated. The optimal Bayesian statistical model is determined according to the Bayesian information criterion and the Akaike criterion. On the basis of the optimal model, the corresponding reliability indices are solved to provide basic data for subsequent risk assessments of FPSO systems.


Introduction
FPSOs are usually deployed in harsh and complex deep-sea waters for their entire life cycle, which often spans more than 10 years. Hence, FPSOs are subjected to extreme environmental loads caused by extreme sea conditions (large waves, currents, etc.). Hull structures and equipment also gradually degrade in highly corrosive environments, leading to serious challenges in ensuring the safety and usability of FPSOs.
The reliability and safety of FPSOs have been addressed in the previous literature. Zhang et al. (2016) proposed a reliability evaluation method for the FPSO side-by-side (SBS) offloading mooring system. This method combines the structural system reliability theory with the weakest failure mode group. A real SBS offloading mooring system was chosen as the simulation object, and the results showed consistency with practical engineering experience and hydrodynamic theory. Sun et al. (2017) studied FPSO collision damage. A dynamic simulation in the time domain was used to predict the structural response resulting from a ship collision. Then, an artificial neural network (ANN) was trained on the basis of the collision data and used as a substitute for the iterative runs of the finite element method (FEM) within the Monte Carlo simulation (MCS) procedure. A qualitative uncertainty analysis of the quantitative damage was also conducted. The proposed method can reduce the number of numerical calculations and eliminate the sample size limitation of MCS. Lu et al. (2018) presented a methodology that combines an ANN and a genetic algorithm to solve the models of dropped-object collision damage and thereby reduce the computational complexity of the conventional FEM.
FPSO internal turret systems involved in oil and gas transmission are at risk of leakage, which may cause fire and explosion and threaten the safety of the entire FPSO system. Internal turret systems are widely used in the field, accounting for about 42% of all FPSOs in service. Hence, their reliability requires adequate attention.
The key components of an internal turret can be divided into four main parts according to function: (1) the turret (T), including the turret shaft, turret cavity, main bearing, and lower bearing; (2) the fluid transfer system (FTS), including multi-channel rotary joints and piping systems located in the upper part of the turret and connected by the turret transfer system (TTS); (3) the TTS, which mainly refers to the middle manifold and turntable that can rotate around the turret; the TTS connects the turret and the FTS; (4) the interface system (IS), which refers to the chain buoys, mooring lines, flexible risers, and rotary joints connecting to the platform structure, together with other auxiliary equipment.
Although the failures of internal turret systems are serious, the statistical failure information is insufficient. This inadequacy presents difficulties in summarizing the risk characteristics using traditional statistical analysis methods. The Bayesian method is an effective tool to deal with this problem. Applying the Bayesian method to modeling and analyzing reliability data can solve problems such as small sample sizes and an unclear understanding of the failure mechanisms. As the Bayesian posterior distribution is a probabilistic description of unknown parameters, it can be propagated through complex models. The Bayesian reliability analysis method has been examined in previous studies. Gelman et al. (2003) studied convergence analysis and model test methods. Hamada et al. (2008) summarized the theoretical development and application of Bayesian methods in reliability data analyses in recent years, focusing on the regression model, acceleration model, degradation model, and other models of failure time. They also introduced the goodness-of-fit test, model verification, the design of reliability test programs, and the basic data required for fault tree analysis. Ntzoufras (2009) presented Bayesian modeling and the necessary steps in detail, including the application of the Bayesian hierarchical model. The corresponding model was solved by the Markov chain Monte Carlo (MCMC) theory, and a detailed calculation case was demonstrated using WinBUGS software. Abaei et al. (2018a) developed a Bayesian network for the reliability assessment and the prediction of the optimum design point of a mooring system. The proposed Bayesian approach could improve the safety of marine floating structures. Abaei et al. (2018b) introduced a safety framework for estimating the Touching Bed Probability (TBP) of a vessel transiting a waterway. Luque and Straub (2016) used a dynamic Bayesian approach to model the deterioration of structural systems. The reliability of a Daniels system and of a steel offshore structure was accurately assessed on the basis of the presented algorithm.
In the present study, two Bayesian models are proposed to calculate the probability of oil and gas leakage of FPSO internal turret systems. These two models are gradually updated with the failure data to reduce the uncertainty of the reliability analysis and make the calculation of the probability close to actual situations. In Section 2, the risk features of FPSO internal turret systems are described, and the Bayesian method is presented. In Section 3, two Bayesian models referring to failures of internal turret systems are established and compared. In Section 4, we summarize our findings and present specific comments about the proposed method.

Problem description
Oil and gas leakages are the main threats faced by FPSO internal turret systems due to the flammable and explosive oil and gas resources involved, the deterioration of material properties (fatigue and corrosion of pipelines and seals), human errors, and failures of control systems. In particular, TTSs, which are subjected to heavy loads, are prone to malfunction. According to relevant studies (Paik et al., 2011; Kang et al., 2017; Jin and Jang, 2015; Meng et al., 2018), during the production phase of the FPSO, the occurrence of oil and gas leakages in internal turrets is relatively rare, but the consequences considerably affect the safety of employees and equipment, especially when large uncontrollable oil and gas leakages lead to fires and explosions. Through safety monitoring systems, the risks are easily discovered in the initial stage of leakages, and injuries and accidents are thus avoided. However, the maintenance of such systems involves a long downtime, which affects the economic efficiency of FPSO production. The failure information of FPSO internal turret systems in the context of leakages is collected and presented in Table 1 according to the Offshore Technology Report (AEA Technology Engineering Solutions for the Health and Safety Executive, 2001). These FPSOs are installed in the UK sector of the North Sea.
According to Table 1, the TTSs are the largest contributor to internal turret leakage. TTSs are subjected not only to physical loads but also to high temperatures and corrosion caused by oil and gas. As a result, the probability of material degradation rises. Establishing a mathematical model to calculate the leakage probability is difficult because many factors need to be considered, such as loads, temperature, and corrosion. The failure mechanisms are too complicated to model explicitly because of the correlation among these factors. In this case, the use of statistical methods, such as the Bayesian method, is an effective approach, especially for the analysis of small samples.

Bayesian method and MCMC simulation
The basic principle of the Bayesian analysis is to continuously update the established model as new information becomes available. A Bayesian analysis can be divided into four main steps: establish the probability model, calculate the posterior distribution, analyze the posterior distribution samples, and generate a statistical inference.
In the Bayesian analysis, the integrals involved are estimated with the Monte Carlo method according to the ergodic theorem. The two common types of MCMC algorithms are the Metropolis-Hastings (M-H) algorithm and the Gibbs algorithm. Although the Gibbs method can quickly converge Markov chains to posterior distributions and presents a small correlation among consecutive samples, its application requires the derivation of the full-conditional probability distributions of all parameters.
Satisfying such a requirement is difficult. In the current work, the random walk M-H algorithm is used to simulate the posterior distribution. In the process of sampling, the transition kernel density function must satisfy three conditions. First, starting from any subset of the parameter space, the chain must be able to move to any position in the parameter space within a finite number of steps. Second, the function must not make the Markov chains periodic. Third, assuming θ represents a parameter describing the probability distribution of the sample, the balance condition should be satisfied for all current states θ_i and candidate points θ*.
The transfer kernel density function and the initial value θ_0 are selected according to the aforementioned principles. For i = 1, 2, …, the iterative steps are as follows.
Step 1. Generate a candidate point θ* according to the transfer kernel density function and the current value θ_{i-1};
Step 2. Calculate the acceptance probability r of the candidate point;
Step 3. Set θ_i = θ* if u ≤ r and θ_i = θ_{i-1} otherwise, where u is a random number drawn from the uniform distribution U(0, 1).
When calculating the acceptance probability r, the principle that candidate points with a high posterior probability density and a large transfer kernel density are more likely to be accepted is satisfied. Through multiple iterations of these steps, a random sequence that meets the requirements is generated. The number of iterations depends on the calculation requirements; generally, higher requirements lead to more iterations.
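As a minimal sketch of the three iterative steps above, the following Python code implements a one-dimensional random-walk M-H sampler; the standard normal target used here is only an illustrative stand-in for a posterior kernel, not the models developed later in the paper.

```python
import numpy as np

def metropolis_hastings(log_target, theta0, n_iter, step=0.5, seed=0):
    """Random-walk M-H sampler for a one-dimensional unnormalized log density."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    theta, log_p = theta0, log_target(theta0)
    for i in range(n_iter):
        # Step 1: generate a candidate from a symmetric normal transfer kernel.
        cand = theta + rng.normal(0.0, step)
        log_p_cand = log_target(cand)
        # Step 2: acceptance probability r = min(1, pi(cand) / pi(theta)).
        r = min(1.0, np.exp(log_p_cand - log_p))
        # Step 3: accept the candidate with probability r, using u ~ U(0, 1).
        if rng.uniform() <= r:
            theta, log_p = cand, log_p_cand
        chain[i] = theta
    return chain

# Illustration: a standard normal density stands in for a posterior kernel.
chain = metropolis_hastings(lambda x: -0.5 * x ** 2, theta0=0.0, n_iter=20000)
sample = chain[5000:]                # discard an initial burn-in segment
print(sample.mean(), sample.std())   # should be near 0 and 1
```

Because the normal proposal is symmetric, the transfer kernel densities cancel in the acceptance ratio, which is why only the target densities appear in Step 2.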

Model availability

Model convergence analysis
Convergence analysis refers to whether the sample of the posterior survey is stable and converges to the target distribution. The convergence of the Markov chains is often judged from the numerical statistics and graphs.
As the generated samples are essentially Markov chains produced on an iterative basis, the correlation among them cannot be ignored. If this correlation is strong, the information contained in the simulation sample is relatively small, indicating that the algorithm mixes poorly. Such a property indicates that the results obtained from the analysis are not reliable. One statistic that measures the correlation among the MCMC simulation samples is the autocorrelation coefficient.
The k-th order autocorrelation coefficient is defined as

ρ_k = Σ_{i=1}^{n-k} (θ_i − θ̄)(θ_{i+k} − θ̄) / Σ_{i=1}^{n} (θ_i − θ̄)²,

where n is the number of simulation samples; k is the number of iterations between two simulation samples, and for a given k, ρ_k is usually called the k-order autocorrelation function of the parameter; and θ̄ is the mean of the simulated samples.
For the calculated correlation coefficients of each order, the first order is usually the largest and is positive. As the value of k increases, the correlation coefficient gradually decreases to zero, indicating that the linear correlation among the samples decreases as the iteration interval grows and that the overall randomness of the samples is satisfactory.
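The autocorrelation coefficient can be computed directly from a simulated chain; in the sketch below, an AR(1) series with coefficient 0.8 stands in for correlated MCMC output (an illustrative assumption, not the paper's chains).

```python
import numpy as np

def autocorr(theta, k):
    """k-th order autocorrelation coefficient of a simulated sample."""
    n = len(theta)
    m = theta.mean()
    return np.sum((theta[: n - k] - m) * (theta[k:] - m)) / np.sum((theta - m) ** 2)

# An AR(1) series with coefficient 0.8 mimics a correlated MCMC output.
rng = np.random.default_rng(1)
x = np.empty(50000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 0.8 * x[i - 1] + rng.normal()

print(autocorr(x, 1))    # close to 0.8: strong correlation at lag 1
print(autocorr(x, 50))   # close to 0: samples 50 iterations apart are nearly independent
```

The decay of ρ_k with k is exactly the behavior used later in the paper to choose a thinning interval for the analysis samples.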
In the convergence analysis of the posterior distribution, whether the statistical uncertainty and the MC uncertainty related to the posterior sample are within acceptable ranges needs to be determined. Statistical uncertainty is determined by the sample and its model; under the framework of the Bayesian analysis, this uncertainty is measured by a parameter's posterior standard deviation. MC uncertainty is derived from the Monte Carlo approximation of the model features and is usually measured by SE(θ|y), which is commonly known as the MC deviation.
In Bayesian estimation, the posterior mean squared error of a Bayesian estimate θ̂ can be calculated by

MSE(θ̂|y) = Var(θ|y) + (E(θ|y) − θ̂)².

Here, the square root of MSE(θ̂|y) is called the posterior standard error of θ̂, and E(θ|y) is the conditional (posterior) expectation of θ. When θ̂ represents the posterior expectation E(θ|y), the posterior mean squared error reduces to the posterior variance Var(θ|y). The posterior standard deviation can be approximately calculated by the standard deviation s_θ of the posterior samples.

In the MCMC algorithm, calculating the MC deviation requires correcting for the correlation of the sequence. Thus, SE(θ|y) can be estimated by

SE(θ|y) = (s_θ/√n) [1 + 2 Σ_{k≥1} ρ_k]^{1/2},

where n is the number of posterior samples and ρ_k is the k-th order autocorrelation coefficient of θ. Meanwhile, if the chain is approximated by a first-order autoregressive process, then

SE(θ|y) = (s_θ/√n) [(1 + ρ_1)/(1 − ρ_1)]^{1/2},

where ρ_1 is the first-order autocorrelation coefficient of θ. If the MC deviation of a parameter to be evaluated is smaller than 5% of its posterior standard deviation, then the samples can be considered convergent. In the M-H method, the convergence rate of the Markov chains is sensitive to the initial values. As a result, a sample path diagram and an ergodic mean diagram are required for the convergence diagnosis.
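The first-order autoregressive approximation of the MC deviation, together with the 5% convergence heuristic, can be sketched as follows; the ideally mixed chain here is simulated rather than taken from a real sampler.

```python
import numpy as np

def mc_standard_error(theta):
    """MC deviation of the posterior mean under a first-order AR approximation:
    SE = (s / sqrt(n)) * sqrt((1 + rho1) / (1 - rho1))."""
    n = len(theta)
    s = theta.std(ddof=1)
    d = theta - theta.mean()
    rho1 = np.sum(d[:-1] * d[1:]) / np.sum(d ** 2)   # lag-1 autocorrelation
    return s / np.sqrt(n) * np.sqrt((1 + rho1) / (1 - rho1))

rng = np.random.default_rng(2)
theta = rng.normal(size=10000)       # stand-in for an ideally mixed chain
se, sd = mc_standard_error(theta), theta.std(ddof=1)
# Rule used in the text: treat the chain as converged if SE < 5% of sd.
print(se, sd, se < 0.05 * sd)
```

For an uncorrelated chain of 10000 samples, SE is about 1% of the posterior standard deviation, comfortably inside the 5% criterion; strong positive autocorrelation inflates SE through the (1 + ρ_1)/(1 − ρ_1) factor.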
The sample path diagram refers to a record of the samples generated by each iteration. Multiple Markov chains are usually generated from different initial points at the same time to prevent the Markov chains from falling into certain local areas of the target distribution. After a certain period, if the sample path diagrams are stable and cannot be distinguished from one another, then the sample is considered as convergent.
The theoretical basis of the MCMC method is the ergodic mean value theorem; therefore, whether a Markov chain converges can be analyzed by observing the ergodic mean. To prevent Markov chains from falling into certain support domains of the target distribution, we can examine whether the ergodic means of multiple Markov chains started from dispersed initial points converge. If only one Markov chain is used, the iterations should be sufficient to ensure that the chain reaches every part of the support. However, the conventional convergence diagnosis method has two disadvantages. First, the amount of information that can be obtained in a single Markov chain can sometimes exceed the amount of information obtained by the iterative comparison of multiple chains. Second, ergodic means that tend to be the same or similar cannot completely guarantee the similarity of multiple chains. Thus, the similarity of chains is studied in the current work by examining the differences in the variances of the chains; this approach is the widely used Gelman-Rubin diagnostic method (Gelman and Rubin, 1992).
The Gelman-Rubin method determines the convergence of chains according to the degree of mixing of different chains. This degree of mixing can be described by the variance. For multi-parameter Markov chains, the in-chain and mixed chain variances should be approximately equal when convergence is reached.
Assume that the expectation and variance of the posterior distribution exist. Let m chains of length n be iteratively generated through the MCMC method, and let θ_j^(t) be recorded as the t-th iteration of the j-th chain. The between-chain variance is calculated as

B = n/(m − 1) Σ_{j=1}^{m} (θ̄_j − θ̄)²,

where θ̄_j is the mean of the j-th chain and θ̄ is the mean of the chain means, and the within-chain variance is

W = (1/m) Σ_{j=1}^{m} s_j², with s_j² = 1/(n − 1) Σ_{t=1}^{n} (θ_j^(t) − θ̄_j)².

The mixed posterior variance can be expressed by Eq. (12):

V̂ = ((n − 1)/n) W + B/n.

If each chain fully converges on the posterior distribution, then the within-chain variance W should be approximately equal to the mixed variance estimate V̂. Therefore, the ratio R̂ = (V̂/W)^{1/2} can be used to determine the convergence. Gelman (1996) pointed out that increasing the number of iterations of the MCMC algorithm reduces the variance of the posterior estimates of the digital features and that R̂ should be smaller than 1.1 or 1.2.
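A compact computation of the variance-ratio diagnostic might look like the following sketch; the "good" and "bad" chains are simulated stand-ins, the latter deliberately stuck in different regions of the support.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin ratio R-hat for m chains of length n (rows = chains)."""
    chains = np.asarray(chains)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)            # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()      # within-chain variance
    V = (n - 1) / n * W + B / n                # mixed posterior variance
    return np.sqrt(V / W)

rng = np.random.default_rng(3)
good = rng.normal(size=(4, 5000))              # 4 chains exploring the same target
bad = good + np.arange(4)[:, None]             # chains stuck in different regions
print(gelman_rubin(good))  # close to 1.0, below the 1.1 threshold
print(gelman_rubin(bad))   # well above 1.1: not converged
```

When the chains mix, B contributes almost nothing and R̂ falls to 1; separated chains inflate B and push R̂ above the 1.1-1.2 threshold quoted in the text.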

Model fitting test
When the information provided by a sample is insufficient, the cross-validation method is usually adopted to avoid the repeated use of data and to analyze any abnormal values of the data. The observation data y are usually divided into subsets: y_(i), the data with the i-th observation removed, is used for the model analysis, and the cross prediction probability density of the remaining observation y_i, CPO_i = f(y_i | y_(i)) (the conditional predictive ordinate), is used for the model test.
The division of subsets presents a difficulty in the analysis. For cases wherein the data are insufficient, the use of the cross prediction probability density function was discussed by Draper (2012).
The posterior predictive ordinate of y_i is PPO_i = f(y_i | y). If CPO_i and PPO_i are similar for all observations y_i (i = 1, 2, …, n), the model can be considered acceptable.
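Given posterior parameter samples, both predictive ordinates can be estimated by Monte Carlo. The sketch below uses hypothetical Weibull posterior samples and hypothetical observations (not the paper's data), and estimates the CPO via the standard harmonic-mean identity CPO_i = [E_post(1/f(y_i|θ))]^{-1}, a technique not stated in the text but commonly paired with MCMC output.

```python
import numpy as np

def weib_pdf(t, a, b):
    """Weibull density with scale a (alpha) and shape b (beta)."""
    return (b / a) * (t / a) ** (b - 1) * np.exp(-((t / a) ** b))

# Hypothetical posterior samples of the Weibull parameters and hypothetical
# failure-time observations, for illustration only.
rng = np.random.default_rng(4)
alpha = rng.normal(10.0, 0.5, size=2000)
beta = rng.normal(1.5, 0.1, size=2000)
y = np.array([2.0, 5.0, 8.0, 12.0])

ppo, cpo = [], []
for yi in y:
    f = weib_pdf(yi, alpha, beta)       # likelihood of y_i under each draw
    ppo.append(f.mean())                # posterior predictive ordinate f(y_i|y)
    cpo.append(1.0 / np.mean(1.0 / f))  # CPO via the harmonic-mean identity
    print(f"y={yi}: PPO={ppo[-1]:.4f}, CPO={cpo[-1]:.4f}")
```

Because a harmonic mean never exceeds the arithmetic mean, CPO_i ≤ PPO_i for every observation, which matches the pattern reported for the fitted models later in the paper.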

Model comparison
In the Bayesian framework, the commonly used standards for the comparison of models are the Bayes factor (BF), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Akaike information criterion (AIC). The Bayes factor involves complicated multiple integration and is computationally difficult. Therefore, in Bayesian model selection based on MCMC, the BIC, DIC, and AIC are the most commonly used estimates. The AIC is an approximation of the K-L distance, which is often used to measure the distance between two probability density functions. For a multi-parameter statistical model, such as a hierarchical model of multi-cell data, the number of parameters to be estimated is uncertain. In this case, the commonly used DIC solves the model selection problem when the parameter space is difficult to determine. The application of the BIC requires a clearly defined number of parameters. Therefore, the DIC can be regarded as an extension of the BIC. Diciccio et al. (1996) pointed out that the BIC can also be used as an approximation of the Bayes factor. The BIC, DIC, and AIC all share the same basic form,

IC = −2 ln f(y|θ̂) + g(k),

where θ̂ is the estimated value of the model parameter vector (such as the posterior median, posterior mean, or mode), k is the dimension of θ̂, g(k) is a penalty function that depends on the particular information criterion, and IC represents the information criterion. In these three methods, the models are selected on the basis of the differences in the information criteria.
ΔIC = IC_i − IC_j,

where IC_i is the information criterion of Model i, and IC_j is the information criterion of Model j. In the calculation of the BIC, g(k) can be described as g(k) = k ln n, where n is the volume of the observation data and k is the dimension of θ. The model with a small BIC is deemed preferable. When the difference between models is inconspicuous, the simplest one is generally accepted.
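The two penalties reduce to short formulas, sketched below with illustrative (hypothetical) maximized log-likelihoods for two two-parameter candidate models; the numbers are assumptions, not values from the paper.

```python
import numpy as np

def aic(loglik, k):
    """AIC: IC = -2 ln f(y|theta_hat) + 2k."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """BIC: IC = -2 ln f(y|theta_hat) + k ln n."""
    return -2.0 * loglik + k * np.log(n)

# Hypothetical maximized log-likelihoods for two two-parameter candidate models
# fitted to n = 25 observations (illustrative numbers only).
n = 25
bic_w, aic_w = bic(-40.1, 2, n), aic(-40.1, 2)
bic_ln, aic_ln = bic(-42.7, 2, n), aic(-42.7, 2)
# The model with the smaller criterion value is preferred.
print(bic_w, aic_w, bic_ln, aic_ln)
```

Since both models have the same k here, the BIC and AIC rank them identically; the penalties only matter when the candidate models differ in dimension.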

Results
3.1 Bayesian statistical model I with the Weibull distribution as the population distribution
The failure mechanism of the data is always an important factor for model selection. The logarithmic normal distribution or the Weibull distribution is often used to model failures caused by fatigue. Therefore, we established Bayesian statistical models with a logarithmic normal distribution or a Weibull distribution as the population distribution.
3.1.1 Bayesian statistical model I
Many factors clearly affect leakages in the FPSO internal turret, and the occurrence of any single factor leads to oil and gas leakages. This system failure mechanism satisfies the weakest link principle of the Weibull distribution, that is, the failure time of the entire system depends on the strength of the weakest link. Therefore, the Weibull distribution can be used to describe the leakage time distribution of the FPSO internal turret system.
As known from the probability density function of the Weibull distribution, the parameters α and β are larger than zero. Thus, the prior distribution should satisfy the positive support domain. The dispersivity of the prior information corresponds to the thickness of the tail, and robustness means selecting a prior distribution that makes the posterior distribution less volatile. In the Bayesian statistical analysis, the Gamma and InverseGamma distributions are often selected as prior distributions for parameters with a positive support domain.
In the analysis process, if the correlation between the parameters α and β is not considered, the prior density function of the parameters is the product of their individual prior densities. According to Eqs. (27) and (31), the kernel of the joint posterior density function of the parameters α and β is then obtained. The method of moments is used to determine the values of the hyperparameters, with the consideration that the prior distribution should have a large range.

Extraction of the posterior samples
As a result of large-sample properties, the posterior distribution often presents good normality. Therefore, a normal distribution is often selected as the proposal distribution of the M-H algorithm. To ensure that the candidate point is always a positive real number, the parameters α and β are sampled on a logarithmic scale. Assume that α_{j-1} is the (j-1)-th sample value. The next candidate point α* is then generated by perturbing ln α_{j-1} with a normal increment of given variance, so that the acceptance probability involving the proposal density can be simplified (Eq. (30)). Similarly, the candidate point β* and its proposal density are obtained, and the acceptance probabilities r_{α*} and r_{β*} of the candidate points follow. The posterior samples satisfying the conditions are extracted according to Eqs. (2)-(3). To avoid the local convergence of the samples, we generate two chains whose initial values differ as much as possible. The first sample values are shown in Figs. 1a and 1b. The posterior sample values of the parameters α and β vary randomly within a certain range. The samples can thus be considered as steady-state simulation samples of a certain distribution. For the samples extracted for the first time, we reduce the impact of the initial values by regarding the first 10000 samples as the burn-in period; these samples are not considered in the subsequent analysis.
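The log-scale random-walk update described above can be sketched as follows. The failure-time data and the diffuse Gamma(1, rate 0.01) priors are illustrative assumptions (the paper's observations are those of Table 1 and its hyperparameters come from the method of moments); note the Jacobian term ln(cand/cur) that a log-scale proposal introduces into the acceptance ratio.

```python
import numpy as np

# Hypothetical failure-time data (years), for illustration only.
t = np.array([0.8, 1.5, 2.3, 3.1, 4.0, 5.2, 6.8, 8.5])

def log_post(a, b):
    """Log posterior kernel of a Weibull(scale a = alpha, shape b = beta) model
    with assumed diffuse Gamma(1, rate 0.01) priors on both parameters."""
    if a <= 0 or b <= 0:
        return -np.inf
    loglik = np.sum(np.log(b / a) + (b - 1) * np.log(t / a) - (t / a) ** b)
    return loglik - 0.01 * a - 0.01 * b   # Gamma(1, 0.01) log priors, up to constants

rng = np.random.default_rng(5)
n_iter, step = 30000, 0.15
a, b = 5.0, 1.0                      # initial values of alpha and beta
samples = np.empty((n_iter, 2))
for i in range(n_iter):
    for j in range(2):               # update alpha, then beta
        cur = (a, b)[j]
        cand = cur * np.exp(rng.normal(0.0, step))   # random walk on the log scale
        prop = (cand, b) if j == 0 else (a, cand)
        # log acceptance ratio; log(cand/cur) is the Jacobian of the log transform
        log_r = log_post(*prop) - log_post(a, b) + np.log(cand / cur)
        if rng.uniform() < np.exp(min(log_r, 0.0)):
            a, b = prop
    samples[i] = a, b
post = samples[10000:]               # discard the first 10000 samples as burn-in
print(post.mean(axis=0))             # posterior means of alpha and beta
```

Sampling on the log scale guarantees positive candidates without rejecting at the boundary, which is the motivation given in the text.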

Autocorrelation analysis of the samples
The first 100 correlation coefficients are calculated. The results are shown in Fig. 2. As the order increases, the autocorrelation coefficient decreases rapidly (Fig. 2). The autocorrelation coefficient of α fluctuates around 0 from approximately the 12th order, and the autocorrelation coefficient of β fluctuates around 0 from approximately the 17th order. According to the definition of the autocorrelation coefficient, the linear correlation of samples spaced approximately 20 iterations apart is small, and such samples can be considered independent; therefore, every 20th sample is retained as an analysis sample.
For 5000 sample values after extraction, the correlation coefficients of the first 100 orders are calculated. The results are shown in Fig. 3.
The autocorrelation coefficient fluctuates near 0 from the first order, thereby confirming the above analysis. The 5000 samples can be regarded as completely independent, which implies that the mixing efficiency of the chain is high and that the randomness of the process can be simulated well. The posterior analysis is based on this analysis sample.
Fig. 1. 1.1×10⁴ sample path of parameters α (a) and β (b).
KANG Ji-chuan et al. China Ocean Eng., 2019, Vol. 33, No. 1, P. 14-25

Prior/posterior distribution diagram
The corresponding histograms are obtained on the basis of 5000 analysis samples, as shown in Fig. 4.
The posterior distributions of the parameters show the normality of large samples, which is consistent with the law of large numbers in the MCMC theory. On the basis of the histograms, the kernel density estimation curves of the samples are drawn, and the curves are regarded as the posterior distributions of the parameters. The results are shown in Figs. 5a and 5b. The variation ranges of the parameters α and β are obviously narrow, indicating that the sample information provided by the observation data makes the distributions of the parameters increasingly concentrated and reduces the uncertainty in the parameter estimation. At the same time, the probability peak value of α increases in Fig. 5a. As known from the parameter definition of the Weibull distribution, the increase of the scale parameter makes the failure time distribution extensive. The following can be determined from Fig. 5b. (1) β being larger than 1 with a large probability indicates that the failure rate of the events is likely to increase with time, which is consistent with the prior estimate. (2) In the range smaller than 1, the value range of the posterior probability does not change significantly, and the posterior probability density is smaller than the prior probability density; however, this change is not obvious. Hence, the observation data cannot significantly affect the posterior distribution in this range. (3) In the range larger than 1, the value range of the parameter obviously decreases, and the density value gradually decreases. According to the characteristics of the Weibull distribution, these changes cause the failure function values of the variables to gradually decrease.

MC deviation
As noted previously, when the MC deviation is smaller than 5% of the standard deviation, the chain can be considered convergent. According to Eq. (10), the two approaches to reduce the MC deviation are to increase the number of samples and to reduce the correlation among samples. The calculated results of the first Markov chain are given in Table 2. The MC deviations of the parameters α and β are smaller than 5% of the corresponding standard deviations. This result shows that the Markov chain is convergent in terms of the chain length and autocorrelation. However, the global convergence of the chain cannot be guaranteed on this basis alone; thus, other methods are needed to ensure the global convergence of the chains.

Variance ratio analysis
As noted previously, for the Markov chains generated above, the in-chain and mixed chain variances are calculated separately to analyze the convergence and mixing of the two chains. The mixed variance ratios R̂_α and R̂_β of the parameters α and β, respectively, are calculated, and the results for the first 500 analysis samples are shown in Fig. 6. Figs. 6a and 6b indicate that the ratios R̂_α and R̂_β of the mixed and in-chain variances approach 1 as the number of iterations increases. Along with the ergodic mean figure of the samples, this result suggests that the samples, having similar means and variances, can be regarded as random samples of the same steady-state distribution and that each chain is globally convergent.

Model testing
Given the limited number of observation samples of the Bayesian statistical model, we use the cross-validation method to perform the model test by examining sample outliers to analyze the degree of fit. At the same time, the posterior prediction distribution can be used to test the model from an overall perspective; then, the overall fit between the samples and the model can be analyzed. The CPO and PPO values are calculated according to Eqs. (18) and (19). The results are shown in Table 3.
The values of CPO_i are correspondingly smaller than the values of PPO_i, thereby indicating that each observation sample has a certain influence on the posterior distribution, in accordance with the maximum likelihood principle under the Bayesian framework. For the samples with the longest and shortest failure times, the difference between CPO_i and PPO_i is relatively large. This result indicates that the influence of these sample values on the posterior distribution is larger than that of the other sample values. Meanwhile, the difference between CPO_i and PPO_i remains small relative to the corresponding values. Thus, the model is acceptable.
3.2 Bayesian statistical model II with the logarithmic normal distribution as the population distribution

Bayesian statistical model II
The logarithmic normal distribution is often applied to downtime models with skewness. It can also be derived from the stochastic process describing fatigue. Thus, it can be used to describe the random events of oil and gas leakages.
As the value range of μ is the whole real number field and the posterior distribution has normality similar to that of the prior distribution, the normal distribution can be selected as the prior distribution of μ. From the uncertainty analysis, we recognize the existence of a relationship between the uncertainty of the parameter μ and that of the random variable. This relationship can be reflected by the correlation analysis of the parameters. Therefore, a prior distribution considering this correlation can be selected, that is, the variance of the parameter μ can be expressed in terms of that of the parameter σ, as shown in the following formulation.
The degree of this correlation can be expressed by different values of k, and the value of k is usually taken as 1. The value range of the parameter σ is positive and shows a certain degree of concentration. Thus, the InverseGamma distribution is often selected as its prior distribution.
According to Eqs. (37)-(39), the joint posterior kernel density function of the Bayesian statistical model is obtained. According to the method of moments, the values of the hyperparameters are determined.
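A sketch of evaluating such a joint posterior kernel on the log scale is given below. The hyperparameters (mu0, k, a0, b0) and the failure times are illustrative assumptions, not the values the paper derives by the method of moments.

```python
import numpy as np

def log_post_lognormal(mu, sigma2, t, mu0=1.0, k=1.0, a0=2.0, b0=1.0):
    """Log joint posterior kernel of a lognormal model (a sketch; the
    hyperparameters mu0, k, a0, b0 are illustrative assumptions):
      ln t_i ~ N(mu, sigma2)
      mu | sigma2 ~ N(mu0, sigma2 / k)     # variance of mu tied to sigma2
      sigma2 ~ InverseGamma(a0, b0)
    """
    if sigma2 <= 0:
        return -np.inf
    x = np.log(t)
    n = len(x)
    loglik = -0.5 * n * np.log(sigma2) - np.sum((x - mu) ** 2) / (2 * sigma2)
    logprior_mu = -0.5 * np.log(sigma2 / k) - k * (mu - mu0) ** 2 / (2 * sigma2)
    logprior_s2 = -(a0 + 1) * np.log(sigma2) - b0 / sigma2
    return loglik + logprior_mu + logprior_s2

t = np.array([0.8, 1.5, 2.3, 3.1, 4.0])   # hypothetical failure times
# The kernel is larger near the sample mean of ln t than far from it.
print(log_post_lognormal(0.7, 0.5, t), log_post_lognormal(5.0, 0.5, t))
```

Such a log-kernel function is exactly what an M-H sampler of the kind described in Section 2 needs: only ratios of the posterior density are required, so normalizing constants can be dropped.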

Extraction of the posterior samples
Unlike in other sampling methods, the candidate point can be obtained directly from a normal distribution in consideration of the normality of the posterior distribution. This method ensures that the posterior distribution of the parameters and the proposal density function are formally similar and that the resulting sample retains good Markov properties.
A part of the acceptance probabilities involving the proposal densities can be simplified. The candidate point of σ is still obtained on a logarithmic scale. According to Eqs. (33)-(36), the acceptance probability of the corresponding candidate point is obtained. The first 100 correlation coefficients of the analysis samples of μ and σ are calculated. As seen in Fig. 9, the autocorrelation coefficients fluctuate around 0, indicating that the analysis samples are well mixed for the subsequent analysis.

Prior/posterior distribution diagram
The corresponding histograms are obtained on the basis of the 5000 analysis samples, as shown in Fig. 10. According to the histograms of μ and σ, the kernel density estimation curves of the analysis samples are drawn and regarded as the posterior distributions of the parameters. In Fig. 11, the posterior values of the parameter μ have obvious symmetry and normality, and the range of the values decreases obviously compared with that of the prior distribution. The value to which the posterior probability peak of μ corresponds is smaller than that of the prior distribution. This result indicates that in the logarithmic normal distribution model, the failure time interval of oil and gas leakages decreases. The value to which the posterior probability peak of σ corresponds is larger than that of the prior distribution. This result shows that the distribution of the failure time interval is wide and that its uncertainty increases. The posterior distributions of the parameters μ and σ are more concentrated than the prior distributions, indicating that the uncertainty of the distribution decreases.

MC deviation
The MC deviations and standard deviations of the first Markov chain of the parameters μ and σ are calculated. The results are shown in Table 4. The MC deviations of the parameters μ and σ are smaller than 5% of the corresponding standard deviations, indicating that the length of a single chain and the correlation among the samples satisfy the convergence requirements of the Markov chains.
Fig. 9. First 100 autocorrelation coefficients of (a) μ and (b) σ for the analysis samples.
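The MC-deviation convergence check can be sketched as follows; the batch-means estimator is one common way to compute the Monte Carlo standard error and is an assumption here, not necessarily the paper's exact formula:

```python
import numpy as np

def mc_error(chain, n_batches=50):
    """Monte Carlo standard error of the chain mean via the
    batch-means method (one common estimator of the 'MC deviation')."""
    batches = np.array_split(chain, n_batches)
    means = np.array([b.mean() for b in batches])
    return means.std(ddof=1) / np.sqrt(n_batches)

def converged(chain, tol=0.05):
    """Convergence rule used in the text: the MC deviation must be
    smaller than 5% of the sample standard deviation."""
    return mc_error(chain) < tol * chain.std(ddof=1)
```

A chain that fails this check needs either more iterations or less correlation between successive samples.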

Variance ratio analysis
Considering two Markov chains with different initial values and numbers of analysis samples, we calculate the ratio between the mean within-chain variance and the mixed-chain variance of the parameters μ and σ (the variance ratios R̂_μ and R̂_σ). The results are shown in Fig. 12. As shown in Fig. 12, R̂_μ and R̂_σ rapidly approach 1 from above. Hence, the analysis samples of the two chains can be regarded as coming approximately from the same steady-state distribution; that is, the analysis samples of the different Markov chains are well mixed. As shown in Table 5, the two sets of values are similar, which indicates a good model fit, and the differences between them decrease with the increase of the sample values. This result implies that the effect of the small sample values on the model parameters is larger than that of the large sample values. Therefore, the samples with low failure times should receive particular attention in the statistical analysis.
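The variance ratio can be sketched as follows, assuming the standard Gelman-Rubin form of the within-chain/mixed-chain comparison:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor R-hat for m chains of
    length n: the ratio of the mixed-chain variance estimate to
    the mean within-chain variance. Values near 1 indicate that
    the chains sample the same steady-state distribution."""
    chains = np.asarray(chains)                 # shape (m, n)
    m, n = chains.shape
    W = chains.var(axis=1, ddof=1).mean()       # within-chain variance
    B = n * chains.mean(axis=1).var(ddof=1)     # between-chain variance
    var_hat = (n - 1) / n * W + B / n           # mixed-chain estimate
    return np.sqrt(var_hat / W)
```

Because the between-chain term enters with a positive weight, R̂ approaches 1 from above as the chains converge, matching the behavior described for Fig. 12.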

Selection of the models and reliability analysis
According to Section 2.3.3, the BIC value and AIC value of the Weibull distribution and logarithmic normal distribution are calculated separately. The results are shown in Table 6.
The BIC value and AIC value of the Weibull distribution are smaller than those of the logarithmic normal distribution, meaning that the Weibull distribution is more advantageous than the logarithmic normal distribution when selecting the failure time distribution model. Hence, the Bayesian model with a Weibull population distribution is chosen as the failure time distribution model.
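The BIC/AIC comparison can be sketched as follows; the failure-interval data are hypothetical, and maximum-likelihood fits are used as simple stand-ins for the posterior parameter estimates:

```python
import numpy as np
from scipy.stats import weibull_min, lognorm

def aic_bic(loglik, k, n):
    """AIC = 2k - 2 ln L ;  BIC = k ln n - 2 ln L."""
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

# Hypothetical failure intervals (days); the paper's actual
# leakage records are not reproduced here.
t = np.array([210., 350., 480., 620., 800., 950., 1100., 1400.])
n = len(t)

# Weibull(alpha, beta): shape and scale via ML, location fixed at 0
a, _, b = weibull_min.fit(t, floc=0)
ll_w = weibull_min.logpdf(t, a, scale=b).sum()

# Lognormal(mu, sigma): sigma is the shape, exp(mu) the scale
s, _, sc = lognorm.fit(t, floc=0)
ll_l = lognorm.logpdf(t, s, scale=sc).sum()

aic_w, bic_w = aic_bic(ll_w, 2, n)
aic_l, bic_l = aic_bic(ll_l, 2, n)
# The model with the smaller AIC/BIC values is preferred.
```

Both criteria penalize the number of parameters k; since both candidate models have k = 2 here, the comparison effectively reduces to the fitted log-likelihoods.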
As noted previously, the posterior mean is chosen as the estimated value of the parameters α and β. The reliability function distribution of oil and gas leakages in the FPSO internal turret is presented in Fig. 13.
In the early operation of the internal turret, the reliability value is high, and the possibility of oil and gas leakages is low. Over time, the reliability value decreases gradually, reaching 0 at approximately 2600 days.
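Under the common convention that α is the shape and β the scale parameter (an assumption about the paper's notation), the Weibull reliability curve can be sketched as:

```python
import numpy as np

def weibull_reliability(t, alpha, beta):
    """R(t) = exp(-(t/beta)**alpha), with alpha as the shape and
    beta as the scale parameter (assumed convention)."""
    return np.exp(-(np.asarray(t, dtype=float) / beta) ** alpha)

# Illustrative values chosen so that reliability is near 0 by
# ~2600 days; they are not the paper's posterior estimates.
alpha, beta = 2.0, 1000.0
days = np.array([0, 500, 1000, 2000, 2600])
R = weibull_reliability(days, alpha, beta)
```

Evaluating R(t) on a dense time grid with the posterior mean parameters reproduces a curve of the kind shown in Fig. 13.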
The distribution of the posterior mean failure time for a leakage, determined from the posterior distributions of the parameters α and β, is shown in Fig. 14. The distribution of the posterior mean failure time exhibits approximate normality, with the posterior peak concentrated at about 900 days. The figure also illustrates that the posterior estimate of this value has low uncertainty.
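The posterior mean failure time can be obtained by propagating the posterior samples of α and β through the Weibull mean formula MTTF = β·Γ(1 + 1/α) (assuming α is the shape and β the scale); the samples below are hypothetical stand-ins for the M-H output:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

# Hypothetical posterior samples of the Weibull parameters
# (placeholders for the M-H analysis samples)
rng = np.random.default_rng(3)
alpha_s = rng.normal(2.0, 0.1, 5000)    # shape
beta_s = rng.normal(1000.0, 30.0, 5000)  # scale (days)

# Mean failure time for each posterior draw:
# MTTF = beta * Gamma(1 + 1/alpha)
mttf_s = beta_s * gamma_fn(1.0 + 1.0 / alpha_s)
mttf_mean = mttf_s.mean()  # posterior mean failure time
```

A histogram of mttf_s corresponds to the distribution in Fig. 14; its spread quantifies the uncertainty of the MTTF estimate.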

Conclusions
In this study, a reliability statistical analysis method based on the Bayesian theory is proposed to solve the probability calculation in quantitative risk assessments, especially for small sample failure events for which failure mechanism models cannot be established. The presented method can integrate prior information into the reasoning and can effectively reduce the uncertainty in the reliability analysis.
A statistical model is established for small sample events of oil and gas leakages in FPSO internal turret systems by using the Bayesian statistical method. The model is calculated and analyzed, and the results are summarized as follows.
The two-parameter Weibull distribution Weibull(α, β) and the logarithmic normal distribution Lognormal(μ, σ) are used as the population distributions, and Bayesian statistical models are established for oil and gas leakage events. The prior distributions of the parameters to be estimated are determined according to the selection method for informative prior distributions: the parameter α of the Weibull distribution follows Gamma(3, 400), and the parameter β follows InverseGamma(4, 8); the parameter μ of the logarithmic normal distribution follows N(6.5, σ), and the parameter σ follows InverseGamma(4, 8).
The two Bayesian statistical models are calculated, and the optimal model is selected according to the information criteria, that is, the BIC and AIC. The M-H algorithm is used to calculate the posterior distributions of the parameters to be estimated. According to the convergence diagnostic criteria and the model test results, both Bayesian statistical models are acceptable. Furthermore, according to the BIC and AIC, the Bayesian model with the Weibull(α, β) population distribution is more advantageous than that with the logarithmic normal distribution in the reliability analysis of oil and gas leakage events. The MTTF of the internal turret leakage is 900 days. The calculated results could provide suggestions for the operation and maintenance of FPSOs.

KANG Ji-chuan et al. China Ocean Eng., 2019, Vol. 33, No. 1, P. 14-25