Optimizing compilers through parallel processors and memory performance observing as combined approach

Authors

  • Dr. Sandeep Kulkarni, Post-doctoral Scholar, Computer Science Department, Sangam University, Bhilwara, Rajasthan, India
  • Dr. K. P. Yadav, Vice Chancellor, Computer Science Department, Sangam University, Bhilwara, Rajasthan, India

DOI:

https://doi.org/10.61841/cb234e41

Keywords:

Memory performance, Parallelism, Matrix, Permutation, Scope Management, Unified

Abstract

The phases of a compiler start with lexical analysis, which tokenizes the input using regular expressions, and continue with parsing, which produces an abstract syntax tree. An automaton may move to an error state when it is given an input for which no transition is defined; hence the phases of a compiler are described with automata and with algorithms that are mathematical in nature. Human-readable source code is processed into machine-readable code, and in interpreted settings this translation happens at run time; compilers and interpreters keep the code readable during this translation, and the approach requires less memory because no platform-specific code has to be stored. A finite automaton takes a string of symbols as input and changes state according to its instructions; it uses and recognizes regular expressions, and if, after processing the whole input according to those instructions, it reaches a final state, that state is known as the accepting state. A compiler that can compile itself, in whatever programming language it is written, is known as a bootstrapping compiler; using only a very small part of a language, a bootstrap compiler can be generated for many programming languages, for example Pascal, Haskell, C, OCaml, and Java. These constructions rest on discrete mathematics, drawing on areas such as algebra and calculus, including set theory and matrices. In some programming languages, interpretation, that is, translation, happens before run time; interpreted code can be executed without the help of machine code and can run on many operating systems. This is attractive because programs run as soon as they are interpreted, whereas one of the problems of compiled programming languages is their inefficiency in this respect. Smalltalk, for example, is a programming language that has been known for being very productive for many years. Language complexity must also be taken seriously; nowadays Swift is used to reduce language complexity.
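As a concrete illustration of the tokenizing and accepting-state ideas sketched in the abstract, the following is a minimal sketch, not taken from the paper; the token names, regular expressions, and function names are assumptions made for the example. It shows a small regular-expression scanner and an explicit finite automaton that accepts unsigned integers.

```python
import re

# A toy lexical analyser: each token class is described by a regular expression,
# mirroring how a compiler's scanner turns human-readable source into tokens.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),        # whitespace is matched but discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def tokenize(source):
    """Yield (kind, text) pairs; raise if no rule matches the input."""
    pos = 0
    while pos < len(source):
        m = MASTER.match(source, pos)
        if not m:
            raise SyntaxError(f"unexpected character {source[pos]!r} at {pos}")
        if m.lastgroup != "SKIP":
            yield m.lastgroup, m.group()
        pos = m.end()

# The same idea as an explicit finite automaton: the state changes on each
# input symbol, and the input is accepted only if a final state is reached.
def accepts_unsigned_integer(s):
    state = "start"                  # start state
    for ch in s:
        if ch.isdigit():
            state = "digits"         # enter/stay in the accepting state
        else:
            return False             # no transition defined: reject
    return state == "digits"        # accepted only in the final state

if __name__ == "__main__":
    print(list(tokenize("x1 = 42 + y")))  # [('IDENT', 'x1'), ('OP', '='), ...]
    print(accepts_unsigned_integer("42"), accepts_unsigned_integer("4a"))
```

The table-driven regular-expression style mirrors how scanner generators are commonly built, while the explicit state machine makes the error-state and accepting-state behaviour described in the abstract visible.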

References

[1] M. O'Boyle and P. Knijnenburg, "Non-singular data transformations: definition, validity, applications," in Proc. 6th Workshop on Compilers for Parallel Computers, pp. 287-297, 2003.

[2] D. Palermo and P. Banerjee, "Automatic selection of dynamic data partitioning schemes for distributed-memory multicomputers," in Proc. 8th Workshop on Languages and Compilers for Parallel Computing, Columbus, pp. 392-406, 2003.

[3] C. Polychronopoulos, M. B. Girkar, M. R. Haghighat, C. L. Lee, B. P. Leung, and D. A. Schouten, "Parafrase-2: an environment for parallelizing, partitioning, synchronizing, and scheduling programs on multiprocessors," in Proc. International Conference on Parallel Processing, St. Charles, IL, pp. 39-48, 2000.

[4] J. Ramanujam, "Non-unimodular transformations of nested loops," in Proc. Supercomputing '92, Minneapolis, MN, pp. 214-223, 2000.

[5] J. Ramanujam and A. Narayan, "Integrating data distribution and loop transformations for distributed memory machines," in Proc. 7th SIAM Conference on Parallel Processing for Scientific Computing, 2015.

[6] J. Ramanujam and A. Narayan, "Automatic data mapping and program transformations," in Proc. Workshop on Automatic Data Layout and Performance Prediction, Houston, TX, 2018.

[7] V. Sarkar, G. R. Gao, and S. Han, "Locality analysis for distributed shared-memory multiprocessors," in Proc. Ninth International Workshop on Languages and Compilers for Parallel Computing, Santa Clara, CA, 2019.

Published

31.07.2020

How to Cite

Kulkarni, S., & Yadav, K. P. (2020). Optimizing compilers through parallel processors and memory performance observing as combined approach. International Journal of Psychosocial Rehabilitation, 24(5), 7893-7898. https://doi.org/10.61841/cb234e41