OPENMP SPEEDUP FOR MULTI-OBJECTIVE THERMAL GENERATION SCHEDULING
The day-ahead thermal generation scheduling comprises two tasks: unit commitment, which determines the on/off schedules of the thermal generators, and power dispatch, which distributes the system load demand among the committed generators. Optimal thermal generation scheduling requires both tasks to be performed effectively so that the forecasted load demand is met over a particular time horizon while satisfying a large set of operating constraints and meeting certain objectives. Generally, the only objective of generation scheduling is to minimize the system operation cost, in which case the problem is known as the classical Unit Commitment Problem (UCP).
In a real-world situation, generation scheduling takes place in an uncertain environment, with thermal generator outages and deviations from the forecasted load being the most likely uncertainties. The reliability of the power system is therefore an important aspect that cannot be neglected. Accordingly, a study was undertaken to address thermal generation scheduling as a multi-objective optimization problem in an uncertain environment, with system operation cost and expected energy not served cost (reliability cost) as the conflicting objectives. The uncertainties arising from thermal unit outages and load forecast error are incorporated using the expected energy not served reliability index. A multi-objective genetic algorithm was developed to solve this scheduling problem.
In the proposed genetic algorithm, the chromosome is an N×Tmax array, where N is the number of generators and Tmax is the scheduling horizon. Two case studies involving two different systems are carried out in this work. In case study 1, the test system is a 20-unit system and the scheduling horizon is 24 hours; thus, N = 20 and Tmax = 24. Further, since a genetic algorithm works on a population of solutions and evolves them over generations/iterations, the population size and number of generations were set (after experiments) to 300 and 40,000, respectively. In case study 2, the test system is a 60-unit system and the scheduling horizon is again 24 hours; thus, N = 60 and Tmax = 24. The population size and number of generations remain the same for the 60-unit system.
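As an illustration, the chromosome representation described above can be sketched in C as follows (a minimal sketch only; the type and function names are hypothetical, not taken from the actual implementation):

```c
#define N    20   /* number of generators (case study 1) */
#define TMAX 24   /* scheduling horizon in hours */

/* A chromosome is an N x TMAX binary array: entry [i][t] is 1 if
 * generator i is committed (ON) at hour t, and 0 otherwise. */
typedef struct {
    int schedule[N][TMAX];
} Chromosome;

/* Number of committed units at hour t. */
int units_on(const Chromosome *c, int t) {
    int on = 0;
    for (int i = 0; i < N; i++)
        on += c->schedule[i][t];
    return on;
}
```

For the 60-unit system of case study 2, only N changes; the rest of the representation is identical.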
The multi-objective generation scheduling using a genetic algorithm with reliability cost as one of the objectives is a very computationally intensive problem, because the algorithm must evaluate the reliability cost for every chromosome at every hour, and each reliability cost evaluation is itself expensive. With 300 chromosomes in the population, 24 hours in the scheduling horizon and 40,000 generations, the reliability cost must be evaluated 300 × 24 × 40,000 = 288,000,000 times, which is an enormous number.
Since the reliability cost evaluation is the bottleneck of the multi-objective genetic algorithm, and the evaluation for each hour is an independent computation, the problem is perfectly suited to parallel computing.
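A minimal sketch of this parallelization pattern is shown below. The cost function here is a hypothetical placeholder, not the actual reliability model: the point is only that the 24 hourly evaluations are independent, so a single OpenMP directive distributes them across threads. The thread count is taken from the OMP_NUM_THREADS environment variable when compiled with OpenMP support (e.g., gcc -fopenmp).

```c
#ifdef _OPENMP
#include <omp.h>
#endif

#define TMAX 24   /* scheduling horizon in hours */

/* Placeholder for the per-hour reliability (expected energy not served)
 * cost. The real evaluation enumerates unit-outage states and load
 * forecast errors, which is what makes it expensive; here a dummy
 * value is returned purely for illustration. */
double reliability_cost_at_hour(const int *commitment, int t) {
    (void)commitment;
    return 100.0 + 10.0 * t;
}

/* Total reliability cost of one chromosome. Each hour's evaluation is
 * independent, so the loop parallelizes cleanly; the reduction clause
 * safely accumulates the per-thread partial sums. Without -fopenmp the
 * pragma is ignored and the code runs serially, unchanged. */
double total_reliability_cost(const int *commitment) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (int t = 0; t < TMAX; t++)
        total += reliability_cost_at_hour(commitment, t);
    return total;
}
```

In the genetic algorithm, this function would be called once per chromosome per generation; parallelizing the inner hourly loop is what removes the bottleneck identified above.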
In this work, parallel computing using the OpenMP paradigm is implemented and the codes are run on the NUS HPC cluster. To gain insight into the overall speed-up achieved, the codes for the two case studies are run for the following cases: a) serial execution on a personal computer (PC); b) serial execution on the cluster; and parallel execution on the cluster using c) 2, d) 4, e) 6, f) 8 and g) 12 threads.
The configuration of the PC is an Intel® Core™ i5-2400 CPU @ 3.10 GHz with 4 GB RAM.
In addition, for a fair comparison, all cluster runs are carried out on the same node of the same cluster: the tiger2-c9 node of the atlas7 cluster, which has two Xeon E5-2630 CPUs @ 2.30 GHz with 48 GB RAM.
Tables 1 and 2 summarize the results for all the cases for the 20-unit and 60-unit systems, respectively. The following observations can be made from Tables 1 and 2. Note that the speed-up is calculated with case a (i.e., the serial run time on the PC) as the base case.
1) For both test systems, running the code serially on the cluster instead of the PC already gives a speed-up of approximately 2.4 to 3.0 times.
2) As the number of threads increases, the run time decreases considerably, giving a significant speed-up over the base case.
3) The speed-up obtained for the 60-unit test system is higher than that for the 20-unit system, because the reliability cost evaluation is more expensive for a larger system and hence accounts for a larger share of the total run time. Thus, the reduction in computational time achieved through parallel computing is much more significant for the 60-unit test system.
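For reference, the speed-up figures in the tables are simple run-time ratios against case a; the following helper (with values hard-coded from the tables only as an illustration) makes the calculation explicit:

```c
/* Speed-up of a configuration relative to the base case
 * (case a: serial execution on the PC). */
double speedup(double base_runtime_s, double runtime_s) {
    return base_runtime_s / runtime_s;
}
```

For example, for the 60-unit system with 12 threads, speedup(33600.0, 2727.0) gives approximately 12.3, matching the last row of Table 2.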
Table 1
20-Unit System

| Case | Platform | Threads | Run time (s) | Speed-Up |
|------|----------|---------|--------------|----------|
| a | PC | Serial | 5,680 | 1.0 |
| b | Cluster | Serial | 2,400 | 2.4 |
| c | Cluster | 2 | 1,355 | 4.2 |
| d | Cluster | 4 | 1,143 | 5.0 |
| e | Cluster | 6 | 1,101 | 5.2 |
| f | Cluster | 8 | 1,034 | 5.5 |
| g | Cluster | 12 | 1,018 | 5.6 |
Table 2
60-Unit System

| Case | Platform | Threads | Run time (s) | Speed-Up |
|------|----------|---------|--------------|----------|
| a | PC | Serial | 33,600 | 1.0 |
| b | Cluster | Serial | 11,240 | 3.0 |
| c | Cluster | 2 | 4,160 | 8.1 |
| d | Cluster | 4 | 3,444 | 9.8 |
| e | Cluster | 6 | 3,166 | 10.6 |
| f | Cluster | 8 | 2,865 | 11.7 |
| g | Cluster | 12 | 2,727 | 12.3 |
Note of thanks
I would like to sincerely thank the NUS HPC team, especially Mr. Yeo Eng Hee, who took a lot of interest in introducing me to the OpenMP parallel computing paradigm; Mr. Srikanth Gumma, who patiently and enthusiastically answered my numerous queries over the last 2 years; and Mr. Zhang Xinhuai, who provided me with the opportunity to write an article for the NUS HPC team. I would also like to thank the rest of the NUS HPC team, including Mr. Wang Junhong and Mr. Gowri Shankar.
I would also like to thank my PhD supervisor, Prof. Dipti Srinivasan, who allowed me ample time to explore the parallel computing benefits for my research problems.