<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Programming Archives - ITEC4B</title>
	<atom:link href="https://itec4b.com/category/computer/programming/feed/" rel="self" type="application/rss+xml" />
	<link>https://itec4b.com/category/computer/programming/</link>
	<description>Information Technology Expert Consulting</description>
	<lastBuildDate>Fri, 03 Feb 2023 15:14:05 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1.3</generator>
	<item>
		<title>C/C++ Compiler Operations</title>
		<link>https://itec4b.com/c-compiler-operations/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Mon, 30 Jan 2023 13:36:52 +0000</pubDate>
				<category><![CDATA[Programming]]></category>
		<category><![CDATA[C]]></category>
		<category><![CDATA[C++]]></category>
		<category><![CDATA[programming]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=643</guid>

					<description><![CDATA[Sources : Delroy A. Brinkerhoff : Object-Oriented Programming using C++Brian Gough, Richard M. Stallman : An Introduction to GCC The process of translating source code into an executable program is called &#8220;compiling the program&#8221; or just &#8220;compiling&#8221;.We usually view the compilation process as a single action and generally refer to it as such.Nevertheless, a modern &#8230; <p class="link-more"><a href="https://itec4b.com/c-compiler-operations/" class="more-link">Read more<span class="screen-reader-text"> "C/C++ Compiler Operations"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p>Sources : <br><span style="text-decoration: underline;">Delroy A. Brinkerhoff : Object-Oriented Programming using C++</span><br><span style="text-decoration: underline;">Brian Gough, Richard M. Stallman : An Introduction to GCC</span></p>



<p><br><span style="text-decoration: underline;"><strong>The process of translating source code into an executable program is called &#8220;compiling the program&#8221; or just &#8220;compiling&#8221;</strong></span>.<br>We usually view the compilation process as a single action and generally refer to it as such.<br>Nevertheless, a <span style="text-decoration: underline;"><strong>modern compiler actually consists of 4 separate programs</strong></span>:</p>



<pre class="wp-block-code"><code>- <strong>Preprocessor</strong>
  Expand macros and included header files

- <strong>Compiler</strong>
  Convert source code to assembly language

- <strong>Assembler</strong>
  Convert assembly language to machine code

- <strong>Linker</strong>
  Link object files and binary libraries, Create the final executable</code></pre>



<p>So here is the process :</p>



<p class="has-text-color" style="color:#0a58ca"><span style="text-decoration: underline;"><strong>Source Code &gt; Preprocessor &gt; Compiler &gt; Assembler &gt; Linker &gt; Executable Program</strong></span></p>
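<p>For instance, the whole pipeline can be observed on a small (hypothetical) file; gcc&#8217;s <code>-save-temps</code> option keeps every intermediate file on disk, and the file names below follow the usual suffix conventions:</p>

```c
/* hello.c -- a minimal file to walk through the four stages.
 *
 *   $ gcc -save-temps hello.c
 *       hello.i   preprocessed source   (Preprocessor)
 *       hello.s   assembly code         (Compiler)
 *       hello.o   object file           (Assembler)
 *       a.out     executable            (Linker)
 */
#include <stdio.h>

/* A plain function: its machine code ends up in hello.o. */
int twice(int n)
{
    return 2 * n;
}

/* The call to printf() stays unresolved in hello.o until the
   Linker binds it to the C library. */
void greet(void)
{
    printf("twice(21) = %d\n", twice(21));
}
```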



<p>A single program usually consists of multiple source code files.<br>It is both awkward and inconvenient to deal with large programs in a single source code file, and spreading them over multiple files has many advantages:</p>



<p>1. It breaks large, complex programs into smaller, independent conceptual units<br>Easier to understand, follow, and maintain.<br><br>2. It allows multiple programmers to work on a single program at the same time<br>Each programmer works on a separate set of files.<br><br>3. It may speed up compilation (depending on the compiler system options used)<br>The compiler system stores the generated machine code in an object file, one object file for each source code file. The compiler system can keep the object files, so if a source code file is unchanged, the linker reuses the existing object file.<br><br>4. It permits related programs to share files<br>For example, office suites often include a word processor, a slide show editor, and a spreadsheet.<br>By maintaining the User Interface code in one shared file, they can present a consistent User Interface.<br><br>5. Although less important, it allows software developers to market software as object code organized as (binary black box) libraries, which is useful when supplying code that interfaces with applications.</p>



<p></p>



<h2>Preprocessor</h2>



<p><strong><span style="text-decoration: underline;">The Preprocessor takes the source code, removes the comments, includes headers, and replaces macros</span></strong>.<br><br>The preprocessor handles statements or lines of code that begin with the &#8220;#&#8221; character, which are called &#8220;<strong>preprocessor directives</strong>&#8220;.<br><br>Note that directives are not C/C++ statements (and therefore do not end with a semicolon) but rather instruct the preprocessor to carry out some action.<br><br><span style="text-decoration: underline;">For each .c/.cpp file, the preprocessor handles directives that begin with the # character and creates a temporary file to store its output</span>.<br>The preprocessor reads and processes each file one at a time from top to bottom.<br><span style="text-decoration: underline;"><strong>It does not change the content of any of the source files it processes</strong></span>.<br><br><span style="text-decoration: underline;">The result is a set of files that contain the source code merged with the header files and with all macros expanded</span>.<br>By convention, preprocessed files are given the file extension .i for C programs and .ii for C++ programs.<br>In practice, the preprocessed file is not saved to disk unless the <code>-save-temps</code> option is used.</p>



<p>Two of the most common directives, and the first that we will use, are <strong>#include</strong> and <strong>#define</strong>.</p>



<p><strong><span style="text-decoration: underline;">The #include Directive</span></strong><br><br><span style="text-decoration: underline;">When the preprocessor encounters the #include directive, it opens the <strong>header file</strong> and adds its contents into the temporary file</span>.<br>The symbols surrounding the name of the header file are important and determine where the preprocessor looks for the file.</p>



<p><code>#include &lt;name&gt;</code><br>The angle brackets denote a system header file that is part of the compiler installation (think of it as &#8220;library&#8221; code)<br>and direct the preprocessor to search for the file where the system header files are located (which varies from one compiler to another and from one Operating System to another).<br><br><code>#include "name.h"</code><br>The double quotation marks identify a header file that is written as a part of a program.<br>The quotation marks instruct the preprocessor to look for the header file in the current directory (i.e., in the same directory as the source code).<br>Header files that a programmer writes as part of an application program typically end with a .h extension.<br><br>You might see two kinds of system header files in a C++ program :<br>Older system header files end with a &#8220;.h&#8221; extension: &lt;name.h&gt;.<br>These header files were originally created for C programs, but may also be used with C++.<br>Newer system header files do not end with an extension: &lt;name&gt;, and may only be used with C++.<br><br>File names appearing between &lt; and &gt; refer to system header files.<br>File names appearing between double quotation marks refer to header files written by the programmer as a part of the program.<br><br><span style="text-decoration: underline;">Note</span>:<br>The include directive does not end with a semicolon and there must be at least one space between the directive and the file name.</p>
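<p>A minimal sketch of the two forms (the <code>"myutils.h"</code> name is hypothetical and left commented out, since no such file exists here):</p>

```c
#include <string.h>     /* angle brackets: a system header, searched for in
                           the compiler's system include directories        */
/* #include "myutils.h"    double quotes: a header belonging to the program,
                           searched for in the current directory first      */

/* strlen() is usable here only because the preprocessor merged the
   contents of <string.h> into this translation unit. */
int word_length(const char *word)
{
    return (int) strlen(word);
}
```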



<p></p>



<p><strong><span style="text-decoration: underline;">The #define Directive and Symbolic Constants</span></strong></p>



<p>The #define directive introduces a programming construct called a <strong>macro</strong>.<br>A simple macro only replaces one string of characters with another string.<br><br><span style="text-decoration: underline;">The #define directive is one (old) way of creating a symbolic constant</span> (also known as a named or manifest constant).<br><span style="text-decoration: underline;">The <strong>const</strong> and <strong>enum</strong> keywords are newer techniques for creating constants</span>.<br>It is a well-accepted naming practice to write the names of symbolic constants with all upper-case characters (this provides a visual clue that the name represents a constant).<br><br><span style="text-decoration: underline;">Note</span>:<br>The define directive does not end with a semicolon and there must be at least one space between the directive and the identifier, and between the identifier and the defined value; the defined value (the third part of the directive) is optional.</p>
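<p>A short sketch contrasting the three techniques (all names below are illustrative):</p>

```c
#define BUFFER_SIZE 128             /* macro: pure text replacement, expanded
                                       by the preprocessor before compiling    */
static const double GRAVITY = 9.81; /* typed constant, checked by the compiler */
enum { MAX_RETRIES = 3 };           /* enumeration constant: an integer constant
                                       expression, usable e.g. as an array size */

/* After preprocessing, the next line reads: static char scratch[128]; */
static char scratch[BUFFER_SIZE];

double weight(double mass_kg) { return mass_kg * GRAVITY; }
int    retries_left(int used) { return MAX_RETRIES - used; }
```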



<pre class="wp-block-code"><code>Stop after the Preprocessing stage. 
<span style="text-decoration: underline;">The output is in the form of preprocessed source code, which is sent to the standard output</span>.
Input files that don't require preprocessing are ignored.

$ gcc -E &lt;program_file_1&gt;.c &lt;program_file_2&gt;.c ... &lt;program_file_n&gt;.c

$ g++ -E &lt;program_file_1&gt;.cpp &lt;program_file_2&gt;.cpp ... &lt;program_file_n&gt;.cpp</code></pre>



<h2>Compiler</h2>



<p><span style="text-decoration: underline;"><strong>The Compiler translates source code into assembly code</strong></span><strong><span style="text-decoration: underline;"> for a specific processor</span></strong>.<br><br>Just as the Preprocessor processes each source code file one at a time and produces one temporary file per source file, the Compiler processes each temporary file one at a time and produces one assembly code file for each.<br><br><span style="text-decoration: underline;">The Compiler also detects syntax errors and provides the diagnostic output programmers use to find and correct those errors.<br></span>Despite all that the compiler does, its operation is largely transparent to programmers.</p>



<pre class="wp-block-code"><code>Stop after the stage of Compilation, do not Assemble. 
<span style="text-decoration: underline;">The output is in the form of an assembler code file</span> for each non-assembler input file specified.
By default, the assembler file name for a source file is made by replacing the suffix .c, .cpp, .i, .ii, etc., with .s
Input files that don't require compilation are ignored.

$ gcc -S &lt;program_file_1&gt;.c &lt;program_file_2&gt;.c ... &lt;program_file_n&gt;.c

$ g++ -S &lt;program_file_1&gt;.cpp &lt;program_file_2&gt;.cpp ... &lt;program_file_n&gt;.cpp</code></pre>



<h2>Assembler</h2>



<p><strong><span style="text-decoration: underline;">The Assembler translates assembly code into machine code the processor understands and can execute</span></strong>.<br><br><span style="text-decoration: underline;">The purpose of the Assembler is to convert assembly language into <strong>machine code</strong> and generate an <strong>object file</strong></span>. <br><br>When there are calls to external functions in the assembly source file, the Assembler leaves the addresses of the external functions undefined, to be filled in later by the linker.</p>
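<p>This can be sketched by declaring an external function by hand (normally the declaration would come from <code>&lt;string.h&gt;</code>):</p>

```c
#include <stddef.h>                  /* for size_t only */

/* Declaration only: the definition lives in the C library.  The Assembler
   records the call below with an undefined address in the object file,
   and the Linker later fills it in from libc. */
extern size_t strlen(const char *s);

int count_chars(const char *word)
{
    return (int) strlen(word);       /* address resolved at link time */
}
```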



<pre class="wp-block-code"><code>Compile AND Assemble the source files, but do not Link.
<span style="text-decoration: underline;">The output is in the form of an object file for each source file</span>.
By default, the object file name for a source file is made by replacing the suffix .c, .cpp, .i, .ii, .s, etc., with .o
Unrecognized input files, not requiring compilation or assembly, are ignored.

$ gcc -c &lt;program_file_1&gt;.c &lt;program_file_2&gt;.c ... &lt;program_file_n&gt;.c

$ g++ -c &lt;program_file_1&gt;.cpp &lt;program_file_2&gt;.cpp ... &lt;program_file_n&gt;.cpp</code></pre>



<h2>Linker</h2>



<p>The final stage of compilation is the <strong><span style="text-decoration: underline;">linking of object files to create an executable program</span></strong>.<br><br>Object files contain machine code and information that the Linker uses to complete its tasks.<br>(Note that &#8220;object&#8221; in this context has nothing to do with the objects involved in Object-Oriented Programming)<br><br><strong>This is where all of the object files and any additional binary libraries are linked together to make the final program.</strong></p>



<p><span style="text-decoration: underline;"><strong>It takes the object files created by the Assembler and links them together, along with system and runtime libraries, to form a complete, executable program</strong></span>.<br><br><strong>An executable requires many external functions from system and runtime libraries</strong>.<br><span style="text-decoration: underline;"><strong>These libraries contain functions that are necessary to run a program on a given architecture</strong></span><br>(linux-vdso.so.n, libc.so.n, ld-linux-x86-64.so.n (amd64), ld-linux.so.n (i386), etc.)</p>



<p><span style="text-decoration: underline;">A library is a binary file (usually not directly executable) containing compiled functions/programming code that may be used/called by other programs/applications</span>.<br><br>By convention, a library name starts with &#8216;lib&#8217;, and the extension determines the type of the library:<br><strong>.a</strong> stands for <strong>archive (static library)</strong><br><strong>.so</strong> stands for <strong>shared object</strong> <strong>(dynamic library)</strong><br><br><strong><span style="text-decoration: underline;">Static Linking</span></strong> :<br><strong>The linker copies all the libraries the program needs into the final executable file</strong> (<strong>content is included</strong>).<br>Static linking may simplify the process of distributing a program to multiple similar environments, since <strong>it already has everything it needs to run</strong>. But any update to the library dependencies won&#8217;t take effect until you perform the whole compilation and linking process again.<br><br><strong><span style="text-decoration: underline;">Dynamic Linking</span></strong> :<br><strong>The linker only places a reference to the required libraries in the final program</strong> (<strong>content is not included</strong>).<br><span style="text-decoration: underline;"><strong>The actual linking happens when the program is executed (loaded at runtime</strong>)</span>.<br>You don&#8217;t need to recompile the program if any update occurs to the library dependencies, but they all <strong>need to be present/installed on the system for the program to work</strong>.<br></p>



<pre class="wp-block-code"><code><strong><span style="text-decoration: underline;">Libraries </span></strong><span style="text-decoration: underline;"><strong>(binaries) </strong></span><strong><span style="text-decoration: underline;">Location</span></strong>

<strong>GNU C Library: Shared libraries   (package: libc&lt;n&gt;)</strong>
Contains the standard libraries that are used by nearly all programs on the system.

<strong>GNU Standard C++ Library v3       (package: libstdc++&lt;n&gt;)</strong>
Contains an additional runtime library for C++ programs built with the GNU compiler. 

Symbolic link /lib -&gt; /usr/lib
On Debian 64-bits amd64 architecture:   /lib/x86_64-linux-gnu/
On Debian 32-bits i386 architecture:    /lib/i386-linux-gnu/

<strong><span style="text-decoration: underline;">List of paths that ld (the linker) will search for libraries</span></strong>
The directories are searched in the order in which they are specified
$ ld --verbose | grep SEARCH_DIR | sed 's/; /\n/g'</code></pre>



<p></p>



<p>The name of the executable file depends on the hosting Operating System:<br>On Linux, Unix, and macOS systems, the linker produces a file named &#8216;a.out&#8217; by default.<br>On a Windows computer, the linker produces a file whose name ends with a .exe extension.</p>



<p>Users may also specify a name that overrides the default.<br><br>For example, if you want gcc to generate an executable with a specific name, use the -o option followed by the desired name:</p>



<pre class="wp-block-code"><code>$ gcc -o &lt;program_name&gt; &lt;program_file_1&gt;.c &lt;program_file_2&gt;.c ... &lt;program_file_n&gt;.c

$ g++ -o &lt;program_name&gt; &lt;program_file_1&gt;.cpp &lt;program_file_2&gt;.cpp ... &lt;program_file_n&gt;.cpp</code></pre>



<p>When compilation finishes, the temporary/intermediate files are removed.</p>



<pre class="wp-block-code"><code>This command shows all shared library dependencies (what libraries the executable requires)

$ ldd &lt;program_name&gt;</code></pre>



<pre class="wp-block-code"><code>readelf displays information about ELF format object files. 
The options control what particular information to display.
This program performs a similar function to objdump but it goes into more detail

$ readelf -a &lt;program_name&gt;</code></pre>



<div style="height:100px" aria-hidden="true" class="wp-block-spacer"></div>


<div class="wp-block-image">
<figure class="aligncenter size-full is-resized"><a href="https://itec4b.com/wp-content/uploads/2023/01/g-compilation-process.png"><img decoding="async" src="https://itec4b.com/wp-content/uploads/2023/01/g-compilation-process.png" alt="" class="wp-image-841" width="757" height="459" srcset="https://itec4b.com/wp-content/uploads/2023/01/g-compilation-process.png 1009w, https://itec4b.com/wp-content/uploads/2023/01/g-compilation-process-300x182.png 300w, https://itec4b.com/wp-content/uploads/2023/01/g-compilation-process-768x466.png 768w" sizes="(max-width: 757px) 100vw, 757px" /></a><figcaption class="wp-element-caption"><strong>g++ Compiler Operations</strong></figcaption></figure></div>


<h2><strong>Loader</strong></h2>



<p>This stage happens when the program starts up.<br>The program is scanned for references to shared libraries.<br>Any references found are resolved and the libraries are mapped into the program.</p>



<pre class="wp-block-code"><code>The <strong>dynamic linker/loader programs</strong> <strong>ld.so</strong> (or<strong> ld.so.n</strong>) and <strong>ld-linux.so</strong> (or <strong>ld-linux.so.n</strong>) find and load the shared objects (shared libraries) needed/used by a program, prepare the program to run, and then run it.

In Debian:
$ ls -l /lib/$( arch )-linux-gnu/ld-linux*

$ &lt;loader_program&gt; &lt;program_name&gt;</code></pre>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Computer Programming</title>
		<link>https://itec4b.com/computer-programming/</link>
		
		<dc:creator><![CDATA[author]]></dc:creator>
		<pubDate>Sat, 28 Jan 2023 19:13:28 +0000</pubDate>
				<category><![CDATA[Computer]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[cpu]]></category>
		<category><![CDATA[programming]]></category>
		<guid isPermaLink="false">https://itec4b.com/?p=465</guid>

					<description><![CDATA[Computers can only understand binary language (sequences of instructions made of 1s and 0s) called machine code or machine language. To command a computer you need to speak its language.Not all the computers &#8220;speak the same way&#8221;, there are different technical implementations and representation of instructions. The instructions that a machine can understand is called &#8230; <p class="link-more"><a href="https://itec4b.com/computer-programming/" class="more-link">Read more<span class="screen-reader-text"> "Computer Programming"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p><strong><span style="text-decoration: underline;">Computers can only understand bi</span></strong><span style="text-decoration: underline;"><strong>nary language (sequences of instructions made of 1s and 0s)</strong></span> called <strong>machine code</strong> or <strong>machine language</strong>.</p>



<p>To command a computer, you need to speak its language.<br>Not all computers &#8220;speak the same way&#8221;: there are different technical implementations and representations of instructions.</p>



<p>The set of instructions that a machine can understand is called the <strong>instruction set</strong> (the range of instructions that a CPU can execute).</p>



<p>The <strong>Central Processing Unit (CPU)</strong>, also called processor, is the electronic component that executes instructions.<br>It is one of the most important parts of any computer.<br>Every CPU has a set of built-in commands (the instruction set); these &#8220;basic&#8221; operations are hardwired into the CPU.<br>CPUs only understand those operations encoded in <strong>binary code</strong>, the low-level machine code language (native code).<br><span style="text-decoration: underline;">Instructions are combined in sequence to make what is known as a <strong>program</strong></span>.</p>



<p>In computer science, an <strong>Instruction Set Architecture (ISA)</strong> is an abstract model of a computer.<br>A device that executes instructions described by an ISA, such as a CPU, is called an implementation.<br><br><span style="text-decoration: underline;">The only way you can interact with the hardware is through the instruction set of the processor</span>.<br>The ISA specifies what the processor is capable of doing.<br><br>It is basically the interface between the hardware and the software.<br>It defines the supported data types, the registers, how the hardware manages main memory, key features (such as the memory consistency, addressing modes, virtual memory), which instructions a microprocessor can execute, and the input/output model of a family of implementations of the ISA.</p>



<p>It can be viewed as a &#8220;programmer’s manual&#8221;, the technical description of how it works and what you can do with it.</p>



<p>Each operation to perform from an instruction set is identified by a binary code known as an <strong>opcode</strong> (Operation Code).<br>The opcode is the first part of an instruction (the first bits).<br>It&#8217;s a unique code that identifies a specific operation.<br><br>On traditional architectures, an instruction includes an opcode that specifies the operation to perform AND zero or more <strong>operand</strong> specifiers, which may be registers, memory addresses, or literal data the operation will use or manipulate.<br><br>In Very Long Instruction Word (VLIW) architectures, multiple simultaneous opcodes and operands are specified in a single instruction.<br><br><span style="text-decoration: underline;">The number of operands is one of the factors that may give an indication about the performance of the instruction set</span>.<br><br><span style="text-decoration: underline;">A <strong>word</strong> is the fixed-sized piece of data handled as a unit by the processor</span>.</p>
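<p>As a toy illustration (this 16-bit instruction format and its opcodes are invented, not a real ISA), an instruction can pack the opcode into its top bits and one operand into the remaining bits:</p>

```c
#include <stdint.h>

enum { OP_LOAD = 0x1, OP_ADD = 0x2, OP_STORE = 0x3 };  /* hypothetical opcodes */

/* Top 4 bits: opcode.  Low 12 bits: one operand (0..4095). */
uint16_t encode(uint8_t opcode, uint16_t operand)
{
    return (uint16_t) (((opcode & 0xF) << 12) | (operand & 0x0FFF));
}

uint8_t  opcode_of(uint16_t instr)  { return (uint8_t) (instr >> 12); }
uint16_t operand_of(uint16_t instr) { return (uint16_t) (instr & 0x0FFF); }
```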



<p><span style="text-decoration: underline;">The number of bits in a word (word size) is an important characteristic of any specific processor design or computer architecture</span>, since it determines how much data the processor can handle in a single operation.</p>



<h2>Computer Architecture</h2>



<p>The <strong><span style="text-decoration: underline;">von Neumann architecture</span></strong> is a computer architecture based on a 1945 description by John von Neumann, and by others, in the <strong><span style="text-decoration: underline;">First Draft of a Report on the EDVAC</span></strong> (Electronic Discrete Variable Automatic Computer) <strong>one of the earliest electronic computers</strong>.</p>



<p>The report is an incomplete 101-page document written by hand by John von Neumann.<br><span style="text-decoration: underline;"><strong>It contains the first published description of the logical design of a computer using the stored-program concept</strong>, which has controversially come to be known as the von Neumann architecture</span>.</p>



<p>The document describes a <span style="text-decoration: underline;"><strong>design architecture for an electronic digital computer</strong></span> with these components:<br>&#8211; A Processing Unit with both an Arithmetic Logic Unit and processor registers<br>&#8211; A Control Unit that includes an Instruction Register and a Program Counter<br>&#8211; Memory that stores data and instructions<br>&#8211; External mass storage<br>&#8211; Input and output mechanisms<br><br>The von Neumann architecture is not perfect, an instruction fetch and a data operation cannot occur at the same time since they share a common bus. This is referred to as the von Neumann bottleneck, which limits the performance of the corresponding system.<br><br><span style="text-decoration: underline;">A stored-program digital computer keeps both <strong>program instructions and data</strong> in read–write, <strong>random-access memory (RAM)</strong></span></p>



<p>The vast majority of modern computers use the same memory for both data and program instructions, but have <span style="text-decoration: underline;"><strong>caches</strong> between the CPU and memory</span>, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instruction and data fetches use separate buses (split cache architecture)<br><br>If based on the von Neumann architecture, processors contain at least a <strong>Control Unit</strong> (CU), an <strong>Arithmetic Logic Unit</strong> (ALU), and <strong>processor registers</strong>.</p>



<p>Every modern processor includes very small super-fast memory banks, called registers.<br><strong><span style="text-decoration: underline;">The registers are the fastest accessible memory location for the CPU and sit on the top of the memory hierarchy</span></strong>.<br>They can be read and written at high speed since they are internal to the CPU.<br>They are much smaller in size than local memory (size of a word: usually 64 or 32 bits) and are used to store machine instructions, memory addresses, and certain other values.<br><br>Data is loaded from the main memory to the registers (via the CPU cache) after which it undergoes various arithmetic operations.<br><br>The manipulated data is then written back to the memory via the CPU cache.<br><br><span style="text-decoration: underline;"><strong>CPU&#8217;s cache memory</strong> is dedicated to hold (inside or close to the CPU) the most commonly used memory words, in order to avoid slower accesses to main memory (RAM)</span>.<br><br><span style="text-decoration: underline;">Most CPUs have a hierarchy of multiple cache levels, with specific instruction and data caches at Level 1</span>.<br>The L1 cache or first-level cache is the closest to the CPU, making it the type of cache with the highest speed and lowest latency of the entire cache hierarchy.<br><br>Instruction cache: used to speed up executable instruction fetch<br>Data cache: used to speed up data fetch and store</p>



<h2>Instruction Cycle</h2>



<p>A program is a sequence of instructions in memory.<br><br><span style="text-decoration: underline;">The CPU executes operations through a cycle known as &#8220;Fetch, Decode, and Execute&#8221;</span>.<br><br>The most important registers (Control Unit) are :<br>&#8211; <strong>Program Counter (PC)</strong>, which points (holds the memory address) to the next instruction to be fetched for execution<br>&#8211; <strong>Instruction Register (IR)</strong>, which holds the instruction currently being executed<br></p>



<pre class="wp-block-code"><code>1. Fetch the instruction from memory into the Instruction Register
2. Change the Program Counter register to point to the next instruction
3. Decode the instruction
      Determine the type of instruction (opcode)
      If the instruction operand is a word in memory: 
         Determine where it is located (memory address)
         Retrieve the data from memory into a register
4. Execute the instruction (ALU)
5. Go to step 1 to begin executing the next instruction</code></pre>
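<p>The cycle above can be sketched as a toy interpreter (the instruction set, opcodes, and single accumulator register below are invented for this sketch):</p>

```c
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };   /* hypothetical opcodes */

/* Each instruction is an { opcode, operand } pair stored in "memory". */
int run(int program[][2])
{
    int pc  = 0;   /* Program Counter: index of the next instruction */
    int acc = 0;   /* accumulator register for intermediate results  */

    for (;;) {
        int opcode  = program[pc][0];            /* 1. fetch               */
        int operand = program[pc][1];
        pc++;                                    /* 2. advance the PC      */
        switch (opcode) {                        /* 3. decode ...          */
        case OP_LOAD: acc  = operand; break;     /* 4. ... and execute     */
        case OP_ADD:  acc += operand; break;
        case OP_HALT: return acc;                /* stop, report result    */
        }
    }
}
```

<p>For example, the three-instruction program {OP_LOAD, 2}, {OP_ADD, 3}, {OP_HALT, 0} loads 2, adds 3, and halts with 5 in the accumulator.</p>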



<p>The operation code tells the ALU what operation to perform; the operands are the values used in the operation.</p>



<h2>Technology Evolution</h2>



<p>Since the invention of the <strong>transistor</strong> (electronic switch) in 1947 by John Bardeen, Walter Brattain, and William Shockley<br>AND the <strong>Silicon Integrated Circuit</strong> in 1958 by Jack Kilby and Robert Noyce,<br><span style="text-decoration: underline;">the computer industry&#8217;s development has never stopped</span>: advances in technology have revolutionized computers, leading to smaller, faster, better products at lower prices.</p>



<p>Manufacturers have packed more and more transistors per chip every year, meaning larger memories and more powerful processors.</p>



<p><span style="text-decoration: underline;"><strong>The latest processors contain billions of transistors</strong></span>.</p>



<p>Moore&#8217;s law is the observation that the number of transistors in an Integrated Circuit doubles about every two years.<br>It is an observation and projection of a historical trend since 1965.</p>



<p>While Moore&#8217;s law will probably continue to hold for some years, it has limits:<br>First, a transistor can only be shrunk so far.<br>Second, there are <span style="text-decoration: underline;"><strong>problems of power consumption and heat dissipation</strong></span>.</p>



<p>Smaller transistors make it possible to run at higher clock frequencies, but running faster also requires a higher voltage.<br>That is, going faster (higher clock speed) means having more heat to get rid of.</p>



<p>The solution is the <strong>multi-core processor architecture</strong>: two identical CPUs on a chip consume less power than one CPU at twice the speed.<br><span style="text-decoration: underline;">That is one of the reasons why processors have more and more cores and larger caches rather than higher clock speeds</span>.</p>



<p><span style="text-decoration: underline;">Taking advantage of these multiprocessors poses great challenges to programmers: it requires knowledge of how to explicitly control/manage parallel execution</span>.</p>



<h2>CPU Core</h2>



<p>Before multi-core processor architecture, computers only had one CPU: the processor could only perform one instruction at a time.<br><span style="text-decoration: underline;">A CPU core is a physical hardware processor with all the architecture that comes with it</span>.<br><span style="text-decoration: underline;">We now have multiple processors grouped inside one Integrated Circuit (single chip), running independently: This is real <strong>hardware parallelism</strong> (as long as the Operating System uses it)</span>.<br>The design is far more advanced; it requires a different architecture to orchestrate all this (controllers, buses, memory access, etc.).<br><br><span style="text-decoration: underline;">This technology has allowed <strong>Machine Virtualization</strong> (standard practice in enterprise IT architecture), which is the foundation of <strong>Cloud Computing</strong></span>.<br><br><span style="text-decoration: underline;">It allows the hardware elements of a single computer (processors, memory, storage, and more) to be divided into multiple virtual computers, commonly called <strong>Virtual Machines</strong> (<strong>VM</strong>)</span>. Each VM runs its own Operating System (OS) and behaves like an independent computer, even though it is running on just a portion of the actual underlying computer hardware.<br><br><span style="text-decoration: underline;">The more cores there are in a CPU, the more efficient it is and the more you can do</span>.</p>



<h2>CPU Thread</h2>



<p><strong>Simultaneous MultiThreading</strong> (SMT) is a technique for improving the overall efficiency of CPUs with hardware multithreading.<br>SMT makes better use of the resources provided by modern processor architectures.<br><br>When SMT is enabled, the Operating System sees the processor as having &#8220;double the cores&#8221; (<strong>Logical Processors</strong>).<br>Two logical cores can work through tasks more efficiently than a single-threaded core, by taking advantage of idle time when the core would otherwise be waiting for other tasks to complete.<br>It improves CPU throughput (usage optimization).</p>



<h2>CPU Clock Speed</h2>



<p>Clock speed is the number of cycles a CPU executes per second, expressed in GHz (gigahertz).<br>A cycle is the basic unit used to measure a CPU&#8217;s speed.<br>During each cycle, billions of transistors within the processor open and close.<br><br>A CPU with a clock speed of 3.4 GHz executes 3.4 billion cycles per second. (Older CPUs had speeds measured in megahertz, or millions of cycles per second.)<br><br>Sometimes, multiple instructions are completed in a single clock cycle.<br>In other cases, one instruction might be handled over multiple clock cycles.</p>



<h2>How Do We Communicate With The Processor?</h2>



<p>Unless you are a supernatural alien coming from another galaxy, we use <strong><span style="text-decoration: underline;">programming languages</span></strong><br>(created by skillful and talented people).<br><br>Programming languages are often categorized as <strong>low-level</strong>, <strong>mid-level</strong> or <strong>high-level</strong> depending on<br>&#8220;how close you are to the hardware&#8221;.</p>



<h2>Low-Level Programming Languages</h2>



<p><span style="text-decoration: underline;">Low-level programming languages are <strong>hardware-dependent</strong> and <strong>machine-centered</strong> (tied to the hardware, providing operations matching the hardware&#8217;s capabilities)</span>.<br><br>Low-level programs execute faster than high-level programs, with a small memory footprint.</p>



<p><strong>Assembly</strong> language (asm) is any low-level programming language with a <span style="text-decoration: underline;">very strong correspondence between the instructions in the language and the processor&#8217;s instruction set</span>.<br><br>Assembly is very close to machine code but is &#8220;more readable&#8221; and uses mnemonics.<br>Using it requires strong technical knowledge (direct interaction with the hardware); Assembly is not easy.<br><br>The statements are made up of opcodes and operands (processor registers, memory addresses, etc.), which are translated into machine code (instructions that the processor understands).<br><br>One line of assembly generally corresponds to one machine instruction.<br><br>Assembly code is <span style="text-decoration: underline;">converted into executable machine code by a utility program referred to as an <strong>assembler</strong></span>.<br><br><span style="text-decoration: underline;">Each assembly language is specific to a particular computer architecture and is not portable to a different type of architecture</span>.</p>
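<p>You can see this correspondence for yourself by asking the compiler to emit assembly instead of an executable. A minimal sketch: the commented output below is one typical x86-64 result, but the exact mnemonics depend on the compiler, options, and target architecture.</p>

```cpp
// add.cpp -- compile with "g++ -S -O1 add.cpp" and inspect the generated add.s
int add(int a, int b) {
    return a + b;
}
// Typical x86-64 output for add (varies by compiler, options, and ABI):
//   lea  eax, [rdi+rsi]   ; compute a + b into the return register
//   ret                   ; one mnemonic per machine instruction
```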



<h2>Mid-Level, High-Level Programming Languages</h2>



<p><span style="text-decoration: underline;">Most programming is done using high-level <strong>compiled</strong> or <strong>interpreted</strong> languages, which are easier for humans to understand, write and debug, and do not require knowledge of the system (hardware) running the program</span>.</p>



<p><span style="text-decoration: underline;">These languages need to be compiled (translated into system-specific machine code) by a <strong>compiler</strong>, or run through other system-specific compiled programs</span>.<br><br>High-level programming languages are generally <strong>hardware-independent</strong> and <strong>problem-centered</strong> (providing operations supporting general problem-solving).<br><span style="text-decoration: underline;">Programmers can move hardware-independent code from one computer to another fairly easily</span>.</p>



<p><span style="text-decoration: underline;">Delroy A. Brinkerhoff, Ph.D</span> :</p>



<p>The <strong>C</strong> programming language is deemed a mid-level language because it allows programmers more access to the hardware than other higher-level languages.<br><br>We can locate <strong>C++</strong> at two different places in this spectrum.<br><br>First, it represents a mid-level language because it retains C&#8217;s access to the hardware.<br>But second, it also represents a high-level language because it supports <strong>object-orientation</strong>, a problem-centered approach to programming.<br><br>The combination of high- and mid-level features makes C++ a popular choice for writing Operating Systems, games and large industrial applications.</p>



<p><span style="text-decoration: underline;">Computers can&#8217;t directly execute programs written in high-level languages</span>,<br>so there must be some way of translating a program written in a high-level language into machine language.</p>



<p>Two kinds of computer programs perform the necessary translation: <span style="text-decoration: underline;">compilers and interpreters</span>.</p>



<p><span style="text-decoration: underline;">A compiler is a program that translates other programs written in a high-level programming language like C or C++ into machine code or machine language</span>.</p>



<p>Some languages such as <strong>Java</strong> and <strong>C#</strong> take a different route.<br>Compilers for these languages translate the <span style="text-decoration: underline;">high-level source code into an intermediate form</span> (a representation that lies somewhere between the high-level and true machine code) called <strong>virtual machine code</strong>.<br><br><span style="text-decoration: underline;">The virtual machine code then becomes the input to another program called an interpreter or Virtual Machine (VM), a program that simulates a hardware CPU</span>. <span style="text-decoration: underline;">Note that here the VM is a software component dedicated to running virtual machine code (a runtime environment for applications); it is different from Hardware Virtualization</span>.</p>



<p>Other languages, such as <strong>Javascript</strong> and <strong>Perl</strong>, are <span style="text-decoration: underline;"><strong>completely interpreted</strong></span>.<br><span style="text-decoration: underline;">These languages don&#8217;t use compilers at all</span>.<br>The interpreter reads the source code, written in the high-level language, and <span style="text-decoration: underline;">interprets the instructions one at a time</span>.<br>That is, the interpreter itself carries out each instruction in the program.</p>






<p><strong><span style="text-decoration: underline;">Compiling and running a program written in a language that produces machine code</span></strong><br>The compiler reads the C/C++ source code from a file that ends with .c or .cpp and produces a machine code file that is executable.<br>See <a href="https://itec4b.com/c-compiler-operations">C/C++ Compiler Operations</a><br></p>



<p><strong><span style="text-decoration: underline;">Compiling and running a program written in a language that produces virtual machine code</span></strong><br>Languages like Java and C# are hybrid languages because they use both a compiler and a Virtual Machine.<br><span style="text-decoration: underline;">They first compile the source code to <strong>virtual machine code</strong></span>, that is, to machine code for a virtual computer (a computer that doesn&#8217;t exist but is simulated by another computer).<br>After compiling the source code, a <strong><span style="text-decoration: underline;">Virtual Machine (VM)</span></strong><span style="text-decoration: underline;"><strong> executes the code</strong></span> by simulating the actions of a real computer.<br>The Operating System loads the VM into main memory and runs it.<br>It is the VM that reads and runs the virtual machine code.</p>



<p><strong><span style="text-decoration: underline;">Running a program written in a purely interpreted language</span></strong><br>Languages like Javascript and Perl <span style="text-decoration: underline;">do not compile the source code at all</span>.<br>As with the hybrid languages (Java and C#), the Operating System runs the interpreter or VM.<br><span style="text-decoration: underline;">The interpreter reads the source code file and <strong>executes the program one statement at a time</strong> without translating the whole program to any other language</span>.<br>Web browsers incorporate interpreters for some languages (like Javascript) while the Operating System runs the interpreters for other languages (like Perl) as application programs.</p>



<h2>High-Level Programming Languages Advantages and Disadvantages</h2>



<p>Each approach to running a program written in a high-level programming language has advantages and disadvantages.<br><br><span style="text-decoration: underline;"><strong>Programs written in fully compiled languages (e.g., C and C++) execute faster than programs written in partially compiled languages (e.g., Java and C#) and run much faster than programs written in fully interpreted languages (e.g., Javascript and Perl)</strong></span>.<br><br>To give some idea of the difference in performance, let&#8217;s say that a C++ program, once compiled, executes in time 1.<br>A program in a hybrid language (compiled and interpreted) will generally run in time 3 to 10.<br>In a purely interpreted language, the same program runs in a time of about 100.</p>



<p>Contemporary versions of the Java and C# VMs use a Just In Time (JIT) compiler that translates some of the virtual machine code to native machine code while the program runs.<br>JIT compilation reduces run time to about 1.5 times that of purely compiled language systems.</p>



<p>&#8220;How does Java compare in terms of speed to C or C++ or C# or Python? <span style="text-decoration: underline;">The answer depends greatly on the type of application you&#8217;re running</span>. No benchmark is perfect, but <a href="https://benchmarksgame-team.pages.debian.net/benchmarksgame">The Computer Language Benchmarks Game</a> is a good starting point.&#8221;<br><br><span style="text-decoration: underline;">On the other hand, once we compile a program written in a purely compiled language, we can&#8217;t easily move the resulting executable machine code to a different platform (e.g., you can&#8217;t run a Windows program on an Apple computer)</span>.<br><br><span style="text-decoration: underline;">In contrast, we can easily move programs we write in interpreted languages between different computers</span>.</p>



<p>Interpreted programs are portable because they run on a VM or interpreter.<br>From the hardware and Operating System&#8217;s perspective, the interpreter is the running program.<br><span style="text-decoration: underline;">Interpreters and VMs are written in purely compiled languages, so they are not portable, but the programs that they run are</span>.<br>Once we install the interpreter on a system, we can move interpretable programs to the system and run them without further processing.<br><br><span style="text-decoration: underline;"><strong>Execution speed is not the only criterion to take into consideration; there is also the speed and ease of development</strong></span>.</p>



<p><a href="https://wiki.python.org/moin/PythonSpeed">Here is an article about Python speed</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
