Abstract
In this work, an embedded system model is designed with one server that receives requests from a requester through a service queue monitored by a Power Manager (PM). A novel approach based on reinforcement learning is presented to predict the best policy among existing Dynamic Power Management (DPM) policies and Deterministic Markovian Non-Stationary Policies (DMNSP). Reinforcement learning, a computational approach to understanding and automating goal-directed learning, supports different devices according to their DPM; it uses a formal framework that defines the interaction between agent and environment in terms of states, actions, and rewards. The capability of this approach is demonstrated with an event-driven simulator, written in Java, driving a power-manageable machine-to-machine device. Our experimental results show that the proposed dynamic power management with a timeout policy yields average power savings of 4% to 21%, and the novel dynamic power management with DMNSP yields average power savings of 10% to 28% over previously proposed DPM policies.
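The agent-environment loop described above can be sketched in Java. This is a minimal illustration, not the paper's implementation: the state space (queue empty vs. request pending), the actions (sleep vs. active), the reward values, the 30% arrival rate, and all class and method names are assumptions chosen to show how a Q-learning Power Manager could trade power cost against request latency.

```java
import java.util.Random;

// Hypothetical sketch of a Q-learning Power Manager: it observes the
// service-queue state and picks a power action, receiving a reward that
// trades off power saved against delay imposed on pending requests.
public class PowerManagerSketch {
    static final int STATES = 2;   // 0 = queue empty, 1 = request pending
    static final int ACTIONS = 2;  // 0 = sleep, 1 = stay active
    static final double ALPHA = 0.1, GAMMA = 0.9, EPSILON = 0.1;

    final double[][] q = new double[STATES][ACTIONS];
    final Random rng = new Random(42);

    // Epsilon-greedy action selection: mostly exploit, sometimes explore.
    int chooseAction(int state) {
        if (rng.nextDouble() < EPSILON) return rng.nextInt(ACTIONS);
        return chooseGreedy(state);
    }

    int chooseGreedy(int state) {
        return q[state][0] >= q[state][1] ? 0 : 1;
    }

    // Assumed reward shape: sleeping saves power on an empty queue (+1)
    // but delays a pending request (-2); staying active burns power (-1)
    // unless it serves a pending request (+2).
    static double reward(int state, int action) {
        if (action == 0) return state == 0 ? 1.0 : -2.0;
        return state == 1 ? 2.0 : -1.0;
    }

    // Standard Q-learning update toward the observed reward plus the
    // discounted best value of the next state.
    void update(int s, int a, double r, int sNext) {
        double best = Math.max(q[sNext][0], q[sNext][1]);
        q[s][a] += ALPHA * (r + GAMMA * best - q[s][a]);
    }

    // Simulate request arrivals (assumed Bernoulli, 30% per step) and learn.
    void train(int steps) {
        int state = 0;
        for (int step = 0; step < steps; step++) {
            int action = chooseAction(state);
            double r = reward(state, action);
            int next = rng.nextDouble() < 0.3 ? 1 : 0;
            update(state, action, r, next);
            state = next;
        }
    }

    public static void main(String[] args) {
        PowerManagerSketch pm = new PowerManagerSketch();
        pm.train(10000);
        // The learned greedy policy should sleep on an empty queue and
        // stay active when a request is pending.
        System.out.println("empty   -> " + (pm.chooseGreedy(0) == 0 ? "sleep" : "active"));
        System.out.println("pending -> " + (pm.chooseGreedy(1) == 0 ? "sleep" : "active"));
    }
}
```

In this toy model the next state does not depend on the action, so the greedy policy simply converges to the per-state reward-maximizing action; in the paper's setting the policy must also account for state-transition costs such as wake-up energy.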
Index Terms
- Real-Time Power Management for Embedded M2M Using Intelligent Learning Methods