TY - JOUR
T1 - GPU-accelerated steady-state computation of large probabilistic Boolean networks
AU - Mizera, Andrzej
AU - Pang, Jun
AU - Yuan, Qixia
N1 - A preliminary version of this work was presented at the 2nd International Symposium on Dependable Software Engineering: Theories, Tools, and Applications [MPY16c]
Funding Information:
We would like to thank the anonymous referees who carefully read the previous versions of this paper and provided many valuable comments, which helped us greatly improve the quality of our paper in both content and presentation. Qixia Yuan was supported by the National Research Fund, Luxembourg (Grant 7814267). Jun Pang was partially supported by the project SEC-PBN (funded by the University of Luxembourg) and the ANR-FNR Project AlgoReCell (INTER/ANR/15/11191283). Andrzej Mizera contributed to this work while holding a postdoctoral researcher position at the Computer Science and Communications Research Unit, University of Luxembourg.
Publisher Copyright:
© 2018, British Computer Society.
PY - 2019/2/12
Y1 - 2019/2/12
N2 - Computation of steady-state probabilities is an important aspect of analysing biological systems modelled as probabilistic Boolean networks (PBNs). For small PBNs, efficient numerical methods to compute steady-state probabilities exist, based on the Markov chain state-transition matrix. However, for large PBNs, numerical methods suffer from the state-space explosion problem, since the state-space size is exponential in the number of nodes in a PBN. In fact, statistical and Monte Carlo methods remain the only feasible approach to address the problem for large PBNs. Such methods usually rely on long simulations of a PBN. Since slow simulation can impede the analysis, the efficiency of the simulation procedure becomes critical. Intuitively, parallelising the simulation process is the ideal way to accelerate the computation. Recent developments in general-purpose graphics processing units (GPUs) make it possible to massively parallelise the simulation process. In this work, we propose a trajectory-level parallelisation framework to accelerate the computation of steady-state probabilities in large PBNs with the use of GPUs. To maximise the computation efficiency on a GPU, we develop a dynamic data arrangement mechanism for handling PBNs of different sizes. In particular, we propose a reorder-and-split method to handle both large and dense PBNs. In addition, we develop a specific way of storing the predictor functions of a PBN and the state of the PBN in the GPU memory. Moreover, we introduce a strongly connected component (SCC)-based network reduction technique to further accelerate the computation. Experimental results show that our GPU-based parallelisation gains approximately a 600-fold speedup for a real-life PBN compared to the state-of-the-art sequential method.
AB - Computation of steady-state probabilities is an important aspect of analysing biological systems modelled as probabilistic Boolean networks (PBNs). For small PBNs, efficient numerical methods to compute steady-state probabilities exist, based on the Markov chain state-transition matrix. However, for large PBNs, numerical methods suffer from the state-space explosion problem, since the state-space size is exponential in the number of nodes in a PBN. In fact, statistical and Monte Carlo methods remain the only feasible approach to address the problem for large PBNs. Such methods usually rely on long simulations of a PBN. Since slow simulation can impede the analysis, the efficiency of the simulation procedure becomes critical. Intuitively, parallelising the simulation process is the ideal way to accelerate the computation. Recent developments in general-purpose graphics processing units (GPUs) make it possible to massively parallelise the simulation process. In this work, we propose a trajectory-level parallelisation framework to accelerate the computation of steady-state probabilities in large PBNs with the use of GPUs. To maximise the computation efficiency on a GPU, we develop a dynamic data arrangement mechanism for handling PBNs of different sizes. In particular, we propose a reorder-and-split method to handle both large and dense PBNs. In addition, we develop a specific way of storing the predictor functions of a PBN and the state of the PBN in the GPU memory. Moreover, we introduce a strongly connected component (SCC)-based network reduction technique to further accelerate the computation. Experimental results show that our GPU-based parallelisation gains approximately a 600-fold speedup for a real-life PBN compared to the state-of-the-art sequential method.
KW - Biological networks
KW - Computational modelling
KW - Discrete-time Markov chains
KW - Graphics processing unit (GPU)
KW - Probabilistic Boolean networks
KW - Simulation
KW - Statistical methods
UR - http://www.scopus.com/inward/record.url?scp=85054536367&partnerID=8YFLogxK
UR - https://orbilu.uni.lu/handle/10993/39098
U2 - 10.1007/s00165-018-0470-6
DO - 10.1007/s00165-018-0470-6
M3 - Article
AN - SCOPUS:85054536367
SN - 0934-5043
VL - 31
SP - 27
EP - 46
JO - Formal Aspects of Computing
JF - Formal Aspects of Computing
IS - 1
ER -