DOI: 10.1145/3130265.3130323
Research article

Executable dataflow benchmark generation technique for multi-core embedded systems

Published: 19 October 2017

ABSTRACT

As the complexity of multi-core embedded systems continues to grow, their optimization and verification become non-trivial. It is therefore important to secure a set of benchmarks of reasonable complexity to validate the design of multi-core embedded systems. The dataflow model has long been considered a suitable model of computation for specifying the behavior of embedded systems. In this paper, we propose a dataflow benchmark generation technique for multi-core embedded systems that leverages two existing tools: a random dataflow topology generator and a random C code generator. In the proposed technique, a C code database is first established, as a preparatory step, by means of a random C code generation tool. Then, a random dataflow graph, with execution time information annotated to each node, is generated by an existing tool. For each node in the generated graph, a number of randomly generated C code segments are chosen and combined into a single function so as to match the given execution time; to this end, a set of linear equations is derived and solved. Subsequently, using existing model-based embedded system design frameworks, we automatically generate an executable benchmark for the entire dataflow graph. Finally, to enhance the timing accuracy of the generated code, a simple calibration step is applied after generation and test runs. We show that the generated codes offer the diversity and complexity required of embedded software benchmarks for multi-core embedded systems.
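The core fitting step described above — choosing how many times to repeat each pre-timed C code segment inside a node's function so that the node's total runtime approximates its annotated execution time — can be sketched as follows. This is an illustrative assumption, not the authors' exact formulation: the function name `fit_segment_counts`, the greedy strategy, and the segment times are all hypothetical, standing in for the paper's linear-equation solve.

```python
# Illustrative sketch (not the paper's actual algorithm): pick
# non-negative integer repetition counts x_i for code segments with
# measured times t_i so that sum(t_i * x_i) approximates a node's
# annotated target execution time T.
def fit_segment_counts(segment_times, target):
    """Greedily fill the target time with whole repetitions of each
    segment, largest segments first; returns (t_i, x_i) pairs and the
    residual time left unaccounted for."""
    counts = []
    remaining = target
    for t in sorted(segment_times, reverse=True):
        n = int(remaining // t)   # whole repetitions of this segment that still fit
        counts.append((t, n))
        remaining -= n * t
    return counts, remaining

# Example: segments of 7.0, 3.0, and 1.0 time units, node target 100.0.
counts, residual = fit_segment_counts([7.0, 3.0, 1.0], 100.0)
```

A real generator would then emit a C function whose body calls each segment the computed number of times; the calibration pass mentioned in the abstract would correct any residual mismatch measured in test runs.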


Published in

RSP '17: Proceedings of the 28th International Symposium on Rapid System Prototyping: Shortening the Path from Specification to Prototype
October 2017, 110 pages
ISBN: 9781450354189
DOI: 10.1145/3130265

Copyright © 2017 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States

