1 An Intro to Computing
In the early 9th century AD, the Persian scholar Muhammad ibn Musa al-Khwarizmi authored a book that significantly influenced the development of European mathematics. In this work, titled “Al-Kitāb al-mukhtaṣar fī ḥisāb al-jabr wa’l-muqābala” (“The Compendious Book on Calculation by Completion and Balancing”), Al-Khwarizmi introduced to the Western world a collection of rules for solving linear and quadratic equations based on intuitive geometric arguments. The book was translated into Latin in the 12th century, giving rise to the term algebra (its Latin title is “Liber Algebræ et Almucabola”). Although algebra had earlier roots in Greece, it was Al-Khwarizmi’s book that made it accessible to European mathematicians, and the Latinized form of his name, Algoritmi, gave us the word algorithm.
An algorithm is a finite, well-defined sequence of computational instructions or steps designed to perform a specific task or solve a particular problem. Each step in an algorithm is precise and unambiguous, and the algorithm is expected to produce a desired output within a finite amount of time. Algorithms are fundamental to computer science and mathematics, providing a systematic method for problem-solving and decision-making.
Let’s consider making a pot of coffee as an example of an algorithm. Here’s a simple, step-by-step process:
- Fill the Coffee Maker with Water: Measure and pour the required amount of water into the coffee maker’s reservoir.
- Place the Coffee Filter: Open the filter compartment and place a coffee filter inside.
- Add Coffee Grounds: Measure the appropriate amount of coffee grounds and pour them into the filter.
- Close the Filter Compartment: Securely close the filter compartment.
- Turn On the Coffee Maker: Press the power button to start the brewing process.
- Wait for the Coffee to Brew: Allow the coffee maker to complete its brewing cycle.
- Serve the Coffee: Once the brewing is finished, pour the freshly brewed coffee into a cup and enjoy.
Each step in this process must be followed in the correct order to successfully make a pot of coffee, illustrating the fundamental nature of algorithms in organizing and executing tasks.
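The coffee recipe above can be sketched directly in Python. The list and step names below are just an illustration of the idea that an algorithm is an ordered sequence of steps; the strings stand in for physical actions a machine cannot actually perform.

```python
# The coffee-making algorithm as an ordered list of steps.
# Order matters: executing the steps out of sequence would not
# produce a pot of coffee.
coffee_algorithm = [
    "Fill the coffee maker with water",
    "Place the coffee filter",
    "Add coffee grounds",
    "Close the filter compartment",
    "Turn on the coffee maker",
    "Wait for the coffee to brew",
    "Serve the coffee",
]

for number, step in enumerate(coffee_algorithm, start=1):
    print(f"Step {number}: {step}")
```

Running this prints the seven steps in order, one per line, mirroring the recipe above.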
Algorithms are like recipes for computing. They provide instructions on how to solve problems, find answers, or move from point A to point B using only the resources available in your computer’s memory.
1.1 Algorithms on Computers
One way to illustrate the concept of a recipe in a mechanical process is by designing machines specifically intended to solve certain problems. The earliest computers were fixed-program computers, meaning each was designed to solve a specific mathematical problem. Some simple devices still use this approach: a four-function calculator, for example, can perform basic arithmetic but is not designed for word processing or gaming. Users cannot change such a machine’s program without replacing internal components.
The Manchester Mark 1, which ran its first program in 1949, was among the earliest stored-program computers. These machines store a sequence of instructions that an interpreter can execute. The interpreter can execute any legal set of instructions, enabling it to compute anything that can be described using those instructions. Both the program and the data it manipulates reside in memory.
Typically, a program counter points to a specific location in memory, and computation starts by executing the instruction at that point. Most often, the interpreter simply proceeds to the next instruction in the sequence. However, in some cases, it performs a test, and based on that test, execution may jump to another point in the sequence of instructions.
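The program-counter idea can be made concrete with a toy interpreter. The instruction set below (`set`, `add`, `jump_if_neg`, `halt`) is invented purely for illustration; real machine instruction sets are larger, but the control loop works the same way: fetch the instruction at the program counter, execute it, and either advance to the next instruction or jump.

```python
# A toy interpreter sketch. Each instruction is a tuple; the program
# counter (pc) selects which instruction executes next.
def run(program):
    acc = 0  # a single accumulator register
    pc = 0   # program counter: an index into the instruction list
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":            # load a value into the accumulator
            acc = args[0]
            pc += 1
        elif op == "add":          # add a value to the accumulator
            acc += args[0]
            pc += 1
        elif op == "jump_if_neg":  # a test that may redirect execution
            pc = args[0] if acc < 0 else pc + 1
        elif op == "halt":         # stop the machine
            break
    return acc

# set acc to 5, subtract 2, test (not negative, so fall through), add 10.
result = run([("set", 5), ("add", -2), ("jump_if_neg", 4), ("add", 10), ("halt",)])
print(result)  # 13
```

Most instructions simply advance `pc` by one; only the conditional jump can move it elsewhere, which is exactly the behavior described above.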
1.2 Programming Languages
To execute algorithms on a computer, we need a programming language to describe the sequence of instructions. In 1936, the British mathematician Alan Turing devised a theoretical device known as the Universal Turing Machine. This hypothetical computer had unlimited memory, represented by a tape on which one could write zeroes and ones, and a small set of simple instructions for moving along the tape and for reading and writing symbols on it.
The Church-Turing thesis posits that if a function is computable, a Turing Machine can be programmed to compute it. The “if” in the Church-Turing thesis is crucial because not all problems have computational solutions.
A function or problem is considered computable if there exists a finite sequence of well-defined steps, an algorithm, that can be implemented by a computational model (such as a Turing Machine) to produce the correct output for any valid input within a finite amount of time. In other words, a problem is computable if there is a systematic method to solve it using an algorithm.
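To make the Turing Machine less abstract, here is a heavily simplified simulator. It is a sketch of the idea, not Turing’s exact formalism: a tape of symbols, a head position, and a transition table mapping (state, symbol) to (new symbol, head movement, new state). The example “program” inverts every bit on the tape.

```python
# A minimal Turing Machine sketch (illustrative only): the machine halts
# when it reaches the halt state or the head runs off the finite tape.
def run_turing_machine(tape, transitions, state="start", halt_state="halt"):
    tape = list(tape)
    head = 0
    while state != halt_state and 0 <= head < len(tape):
        symbol = tape[head]
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol               # write the new symbol
        head += 1 if move == "R" else -1      # move the head one cell
    return "".join(tape)

# A tiny program: flip every bit, moving right until the tape ends.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_turing_machine("10110", flip))  # 01001
```

Despite its simplicity, this fetch-test-write-move loop is the entire computational model the Church-Turing thesis refers to.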
The Church-Turing thesis leads directly to the concept of Turing completeness. A programming language is said to be Turing complete if it can simulate a universal Turing Machine. Virtually all modern general-purpose programming languages are Turing complete, meaning that anything that can be programmed in one language (e.g., Python) can also be programmed in another language (e.g., Java). While certain tasks may be easier to program in specific languages, all such languages are fundamentally equal in terms of computational power.
1.3 Types of Programming Languages
1.3.1 Low-Level vs. High-Level Languages
Low-level languages are closer to machine language, providing little to no abstraction from a computer’s hardware. They allow direct control over the hardware and memory, making them highly efficient but also more complex to write and understand. Examples of low-level languages include Assembly language and machine code.
Because low-level languages operate with minimal abstraction, they can produce high-performance programs that execute faster and use resources more efficiently. However, this comes with the trade-off of complex and less intuitive code, requiring a deep understanding of computer architecture. Additionally, low-level programs are often platform-specific, tailored to a particular type of processor or computer architecture.
Example: Assembly Language
MOV AX, 1 ; Move the value 1 into the AX register
ADD AX, 2 ; Add the value 2 to the AX register
MOV BX, AX ; Move the result from AX to BX
High-level languages abstract away the details of computer hardware, allowing programmers to focus on the logic of the problem rather than the intricacies of the machine. They feature more intuitive syntax and semantics, making them easier to learn and use. Programs written in high-level languages are generally portable across different platforms with minimal modification. These languages also come with extensive libraries and frameworks that simplify complex tasks such as GUI development, web programming, and data manipulation. Although this abstraction can result in slower execution compared to low-level languages, the trade-off is often worth it for the ease of development and maintenance.
Example: Print a greeting to the screen in Python
print("Hello, World!")
Low-level languages offer greater control over hardware and performance optimization, whereas high-level languages prioritize ease of use and development speed. This means that while low-level languages generally result in faster and more efficient programs, they come at the cost of greater complexity. In contrast, high-level languages, although potentially slower, provide significant productivity benefits. Their readability and maintainability make them suitable for large-scale applications and collaborative projects. Additionally, high-level languages reduce development time due to their simplicity and the availability of powerful libraries and frameworks.
Understanding the trade-offs between low-level and high-level languages is crucial for selecting the right tool for a given task. For most applications, high-level languages provide sufficient performance while dramatically simplifying development. However, for performance-critical applications, such as operating system kernels or embedded systems, low-level languages remain indispensable.
1.3.2 General-Purpose vs. Domain-Specific Languages
General-purpose programming languages are highly versatile, allowing them to be used for a wide array of programming tasks, including web development, data analysis, game development, and more. They come with extensive standard libraries and frameworks that support diverse functionalities, making it easier to implement complex features. Programs written in these languages are often portable across different operating systems and platforms. Additionally, these languages usually have large communities, extensive documentation, and abundant resources for learning and troubleshooting. Examples of well-known general-purpose programming languages include Python, Java, C++, and JavaScript.
Domain-specific languages (DSLs) are designed to handle specific types of tasks, offering features and abstractions directly relevant to their domain. They often allow for more concise and efficient coding within their domain, reducing the complexity and effort required to develop certain applications. However, their functionality is typically narrow, focusing on particular problem areas and lacking the versatility of general-purpose languages. Examples of well-known domain-specific languages include SQL, HTML, and MATLAB.
General-purpose languages are suitable for a wide variety of applications, providing greater flexibility and usability across many contexts. In contrast, domain-specific languages are specialized for particular tasks, making them more efficient and easier to use within their specific domains. While general-purpose languages often require learning a broader set of concepts and syntax, domain-specific languages have a narrower learning curve focused on their particular use case.
1.3.3 Interpreted vs. Compiled Languages
Interpreted languages execute code directly through an interpreter, which translates high-level instructions into machine code line by line. This immediate execution has several advantages. Debugging is often easier because the interpreter can generate informative error messages that point to the exact line where an error occurred. Interpreted languages also offer platform independence: the same source code runs on any platform with a compatible interpreter, enhancing portability. However, interpreted programs generally run slower than compiled programs because translation happens at runtime. Well-known interpreted languages include Python, JavaScript, Ruby, and PHP.
Compiled languages require the source code to be translated into machine code by a compiler before execution, producing an executable file. This pre-execution compilation results in faster execution, as the translation is completed beforehand, allowing the executable file to be directly executed by the hardware. Compilers also perform extensive checks during compilation, catching many types of errors before the program runs. However, compiled executables are specific to the target platform’s architecture, making them less portable unless recompiled for different platforms. Well-known compiled languages include C, C++, Rust, and Go.
Example of C Code:
#include <stdio.h>

int main() {
    printf("Hello, World!\n");
    return 0;
}
This C code must be compiled into an executable file before it can be run.
Compiled languages generally produce faster-running programs since the code is translated into machine language ahead of time, whereas interpreted languages tend to be slower due to on-the-fly translation. However, interpreted languages offer a quicker development cycle, allowing code to be written and tested immediately without a separate compilation step, which is advantageous for rapid prototyping and iterative development. Interpreted languages also boast better portability, as the same code can run on any platform with the appropriate interpreter, while compiled languages require recompilation for different platforms, adding complexity.
Debugging is typically easier with interpreted languages because they provide more immediate and informative error feedback, pointing directly to issues in the source code. Although compiled languages catch many errors at compile time, their runtime error messages can be less detailed. Additionally, interpreted languages can support more dynamic programming features, such as dynamic typing and runtime evaluation of code, which can be more challenging to implement efficiently in compiled languages.
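Python illustrates the dynamic features mentioned above. In the sketch below, the same variable is rebound to values of different types with no declarations, and a string of source code constructed at runtime is evaluated with the built-in `eval`; both are straightforward in an interpreted language.

```python
# Dynamic typing: the same name can refer to values of different types.
x = 42           # x currently refers to an int
x = "forty-two"  # now x refers to a str; no declaration or recompilation needed

# Runtime evaluation: evaluate source code that exists only as a string.
expression = "3 * (4 + 5)"
result = eval(expression)
print(result)  # 27
```

Note that `eval` should only ever be applied to trusted input; it is shown here to demonstrate runtime code evaluation, not as a recommended everyday tool.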
1.4 Intro to Python
In this course, we will be using Python. Python is a general-purpose programming language that can be used to build almost any kind of program that does not require direct access to the computer’s hardware. However, Python is not optimal for programs with high reliability constraints or those that are built and maintained by many people over a long period.
Python has several advantages over many other languages. It is relatively simple and easy to learn. Because Python is designed to be interpreted, it provides runtime feedback that is especially helpful for novice programmers.
Python was first developed by Guido van Rossum in 1990. It went largely unnoticed during its first decade but gained popularity around 2000 with the release of Python 2.0. Python 3.0, released at the end of 2008, cleaned up many inconsistencies in Python 2’s design. While Python 3.0 is not backward-compatible with earlier versions, most important public domain libraries have been ported over, eliminating the need to use outdated software.