Monthly Archives: August 2013

Think Python Solutions on Github

I’m quite surprised how much interest there has been in my solutions to the Coding Bat exercises. Therefore, I’ve decided to also publish my solutions to the end-of-chapter exercises from Allen Downey’s Think Python: How to Think Like a Computer Scientist. You can find them on my GitHub page.

My solutions are almost complete. I only skimmed the chapters on turtle graphics, GUI (Tkinter), and most of object-oriented programming.

Allen Downey himself provides solutions to many of the exercises. On my GitHub page you’ll find solutions to all the exercises he doesn’t cover. The rest aren’t duplicates, though: I’ve had a look at a few of Allen Downey’s solutions, and some were quite different from mine. So it may be worth checking out both if you decide to go through Think Python; you’ll probably learn something from his solutions as well as mine.

Allen Downey’s Think Python: How to Think Like a Computer Scientist

Many textbooks are bloated, poorly structured, and badly written. Most seem to be quite useless without an accompanying college course, but if the course is well-taught, you can often skip the textbook altogether. Programming is no exception to this rule. As Python got more popular, the publishing industry started churning out one tome after another, and from what I’ve seen, they are often dreadful.

For a particularly bad example, look at Mark Lutz’s Learning Python, now in its 5th edition. It’s a staggering 1,600 pages thick and full of downright absurd examples that consist of little more than manipulating strings like “spam” and “egg”; if you’re lucky, he’ll throw in the integer 42 as well. Mark Lutz’s book is quite possibly the worst technical book I have ever encountered, but the other books I’ve sampled were not much better.

One thing they all seem to have in common is an inflated page count. I think this is simply a tactic of publishers to justify a higher price. Adding another 500 pages costs very little with a decent print run, yet all that dead weight allows them to increase the retail price by 100%. Apparently consumers have been misled into believing that a higher page count means more bang for the buck, but the opposite is true.

On the other hand, Think Python: How to Think Like a Computer Scientist, in version 2.0.10, hardly exceeds 200 pages. Yet, Allen Downey manages to cover all basic programming constructs, recursion, data structures, and much more. He even details helpful debugging tips, and added a useful glossary for quick lookup of terms you may be unfamiliar with. File I/O got its own chapter. That’s not all. Towards the end of the book, he invites you to explore a GUI toolkit (Tkinter), object-oriented programming, and explains the basics of the analysis of algorithms. The amount of content in this book is quite staggering, especially when compared to its peers. Downey managed to organize his material very well, which resulted in a book that is slim, yet still feels complete.

What I particularly liked about Think Python is that the material is presented in a clear, logical order. Consequently, object orientation shows up very late; in fact, it is introduced as an optional feature. This is how it should be done, if you want to include OOP at all. For a beginner, “modern” OOP adds unnecessary complexity and makes it harder to form a clear mental model of the problem you are going to solve. Competent experienced programmers, on the other hand, tend to be highly critical of OOP, so you’re probably better off if you never encounter it. That’s not (yet?) a mainstream opinion, though, so you may have to learn OOP eventually.

In a book like Liang’s Introduction to Programming Using Python, you get dozens of exercises at the end of each chapter. Most are a drudgery, focussing on minute details of the language. Often they are mere variations of one another, not unlike the mechanical “drills” that are popular in high school mathematics education. But this isn’t even the worst part. Nowadays, you can’t just pick up an old edition of a textbook and get basically the same product plus a few typos here and there. Instead, there are changes all over the place, not all of them particularly well thought out, and the exercises get modified as well. Of course, compulsive updating and rushing a book to release make it easy for new errors to creep in, as evinced by the long list of errata for every new edition. On a side note, Liang’s books are particularly aggravating since you don’t even get the full book anymore. Instead, a good chunk of the content consists of “bonus chapters” that have to be downloaded using a code printed in the book, presumably as an attempt to make buying used copies less attractive.

Compared to that despicable strategy, you can get Downey’s book not only free of charge, but complete too. His exercises are often surprisingly engaging for a beginner’s text. Writing a simple recursive function that checks whether one word is an anagram of another is not very exciting. On the other hand, processing a text file that contains over 100,000 words and finding the five longest anagrams in the English language is more involved; to solve that exercise, you have to draw on previous chapters. This makes Think Python particularly interesting for autodidacts, since you can effectively use the exercises to check whether you have gained a firm grasp of the material. There is a reasonable number of exercises in the book, and they are well chosen. They often systematically build upon each other. This should be normal for textbooks, but it’s the exception.
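To give a flavor of that kind of exercise, here is a minimal sketch of one way to find anagram groups in Python. The function name and the tiny word list are my own illustration, not Downey’s solution; a real run would read the words from a 100,000-word file.

```python
from collections import defaultdict

def longest_anagrams(words, n=5):
    """Group words by their sorted letters; words that share a
    signature are anagrams of one another."""
    groups = defaultdict(list)
    for word in words:
        groups["".join(sorted(word))].append(word)
    # keep only groups with at least two distinct words
    anagram_groups = [g for g in groups.values() if len(set(g)) > 1]
    # longest words first
    anagram_groups.sort(key=lambda g: len(g[0]), reverse=True)
    return anagram_groups[:n]

words = ["listen", "silent", "enlist", "cat", "act", "dog"]
print(longest_anagrams(words, 1))  # [['listen', 'silent', 'enlist']]
```

The trick of sorting each word’s letters to obtain a canonical signature is what makes the exercise tractable even for a large word list.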

Think Python: How to Think Like a Computer Scientist can be freely downloaded at the homepage of the author. The book has been released under a free license. As a consequence, there are editions for different programming languages. Think Python itself is based on an earlier book that covered Java. Being freely available also led to a large audience for this book. Downey acknowledges dozens of people who have made suggestions or pointed out errors in the text. After about a decade, this now leads to Think Python being a very polished book. Of all the introductory Python textbooks I’ve had a look at, it is the only one I feel comfortable recommending. It’s great for complete beginners and also for people who have experience in another language and quickly want to familiarize themselves with Python.

Review: Introduction to Systematic Program Design – Part 1 — Coursera

Some years ago I tried reading How to Design Programs (HtDP) by Felleisen et al., which is an introductory computer science text book. It focuses on a special “design recipe” for designing programs and teaches programming in a Lisp-variant. The authors claim that students who start with HtDP and then proceed to study Java for one term tend to become better Java programmers than those who have been taught Java for two terms.

The premise of the book did sound tempting. Sadly, I found that the pace was way too slow to keep my attention. How to Design Programs was tedious to read because it was rather verbose, and, for my taste, there were too many comparatively insignificant exercises. What I also didn’t like was that the authors went out of their way to avoid mathematical examples, which occasionally led to programs that were more complex than they otherwise would have been. My memory is a bit foggy, but I vaguely remember an exercise in which you had to process a list that consisted of images, but the underlying principle could just as well have been illustrated through some basic mathematical operations on a list of integers.

I eventually put HtDP on the shelf and forgot about it, but I was reminded of it again when I learnt that Prof. Gregor Kiczales of the University of British Columbia was about to teach a course based on HtDP on Coursera, called Introduction to Systematic Program Design. I signed up because I still had an interest in learning the design recipe, and I was curious whether it really helps turn you into a better programmer. The original announcement promised a course that would teach you the design recipe in a special “student language”, a variant of Lisp; once you had made it through that sequence, you could pick a different language to tackle the same material again and see how the design recipe helps you become productive in a possibly unfamiliar language more quickly.

Coursera logo of How to Design Programs

Before I go on, I should probably explain what the design recipe is. In short, it’s a systematic approach to designing programs. There are recipes for designing functions, data definitions, and even interactive programs that use Racket. Further recipes cover, for instance, function composition, backtracking search, and generative recursion. You shouldn’t think of those recipes as straitjackets but as guides. They may be unnecessary for simple programs, but for more complex ones, they allow you to reason about the problem you want to solve in a more disciplined way.

Here is the recipe for designing functions, which I’ve taken from the course site:

;; Number -> Number
;; produces n times 2
(check-expect (double 0) (* 0 2))
(check-expect (double 1) (* 1 2))
(check-expect (double 3) (* 3 2))

;(define (double n) 0) ; this is the stub

;(define (double n)    ; this is the template
;  (... n))

(define (double n)
  (* n 2))

The first line describes the types involved: to the left of the arrow is the type of the input, to the right the type the function returns. The next line is the purpose statement, which describes the function in English. The “check-expect” lines are tests for the function, but before you write those, you are supposed to write the stub. With the stub in place, you can run the tests and check whether they are well-formed.

The templates were recommended for beginners, and were dropped in the latter half of the course. As the name implies, they give you a template for the actual function you’re going to write. Now, in the given example, this may look a bit silly, but when you’re working on a more complex function that requires, for instance, generative recursion, the templates are quite useful. There is a bit more to the design recipe as it extends to data definitions and recommendations for function decomposition, but their general outline should now be clear.
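As an aside, the recipe carries over to other languages. Here is my own Python sketch (not course material) of the same double function, with the signature and purpose as comments and the check-expects turned into plain assertions:

```python
# Signature: int -> int
# Purpose: produce n times 2
def double(n):
    return n * 2

# the check-expects become assertions
assert double(0) == 0 * 2
assert double(1) == 1 * 2
assert double(3) == 3 * 2

print(double(3))  # 6
```

The mechanics differ, but the discipline is the same: state the types and the purpose, write the tests, then fill in the body.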

Introduction to Systematic Program Design was originally planned as a single course spanning 14 weeks, if I recall correctly. However, this was changed and the course was split into two parts. A starting date for part 2 has not been announced yet, and Prof. Gregor Kiczales wrote that they might offer part 1 again, aimed at students at the University of British Columbia, before hosting part 2 on Coursera.

The course itself targets both beginners and experienced programmers. According to Prof. Kiczales, more experienced programmers who took his course said that it helped them improve their craft. I’m in the latter camp, and I subscribe to that notion. Introduction to Systematic Program Design took me a few dozen hours to complete, but I’ve probably already saved half of that amount of time on various programming projects thanks to following the design recipe at times.

The course takes eight weeks and consists of eight units. There are eight weekly quizzes, two programming projects, and one final exam. To earn a regular certificate of accomplishment, you need an overall score of 50%; a certificate with distinction requires 80%. Homework counts for 40% of the grade, the projects for 30%, and the final exam for 30%. I’ll say more about grading and the projects further down.

The first few weeks of the course are relatively slow-paced. If you’ve got no experience with programming, you should pay close attention and only proceed once you really understand all the concepts that are taught. The videos offer excellent support, as they not only cover the necessary theoretical foundations but also walk you through many examples. Experienced programmers can probably go through the first three weeks rather quickly; they cover data types, function composition, and how to model data. However, the course uses a Lisp variant as its teaching language, and if you come from an imperative background, you may find some of the assignments challenging. Each week, numerous practice exercises of varying difficulty were provided that let you work your way up smoothly from the lecture material to the homework problems.

Week 4 introduced interactive programs. I was quite surprised how easy it was to animate some geometric shapes on screen. As the programs became more complex, I was pleased by the user-friendliness of the DrRacket environment, which was a real joy to use. Week 4 also brought a noticeable increase in difficulty. It should probably be pointed out that some of the practice exercises tended to be more challenging than the homework problems, which were assessed in the quizzes. One of the exercises had you code up a rolling lambda symbol, the DrRacket logo. It was supposed to change direction when you clicked the mouse or when it hit the left or right border of the screen. I’m not sure the following still image can convey it clearly, but I hope it’ll do:

Rolling lambda, moving from right to left

This problem involved rotation in positive and negative direction, translation on the x-axis, collision detection, as well as user interaction due to capturing mouse inputs. I noticed that there were many complaints in the forum about that exercise. I then checked my solution with the one that was provided, and could see what the issue was: the solution the course staff provided was tersely written and needlessly complicated, and therefore difficult to follow.

However, I was pleasantly surprised that the people involved with the course reacted quickly to the, in my opinion, justified complaints and revised their sample solution. Some days later an additional practice problem was added that slightly simplified the original problem. I found this attitude most impressive. From university I am used to a “sink or swim” approach, which is a stance you can encounter in some online courses as well. For instance, in Martin Odersky’s Functional Programming Principles in Scala there was an enormous discrepancy between the material in the lectures and the exercises, but the TAs thought this was no problem at all and only reiterated that this course was basically identical to the one at EPFL, and that they had no intention of making the material more accessible. What the TAs didn’t tell you, or didn’t want to draw attention to, though, was that at universities you don’t just have lectures and hand-ins, but also tutorial sessions or “labs” and office-hours. In one of the announcements Prof. Kiczales specifically made the point that the course, when taught at UBC, has a weekly three-hour lab session, which obviously can’t be replicated online, and that as a result, the online offering may be more challenging. However, instead of UBC only putting their course material online, they did their best to provide a supportive infrastructure, without watering down the course.

That this course wasn’t “dumbed down” became clear in the second half, which covered recursion and mutually referential types, eventually leading to generative recursion. Week 8 had, fittingly, some of the most challenging exercises which asked you to write programs that generate various fractals. The final project was an application of the backtracking search that was introduced in the lectures that discussed how to solve Sudoku puzzles. The variation was to write a solver for the 4-queens problem, with the option to generalize the solution to the n-queens problem. By week 8 I saw the value of the “design recipe”. While it looks like overkill in the first few weeks, which is a point Prof. Kiczales acknowledged himself repeatedly, it does make your life easier if you stick to it.
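To give an idea of the kind of backtracking search the final project involves, here is a minimal n-queens solver in Python. This is my own sketch, not the course’s Racket solution; the function name is made up for illustration.

```python
def solve_queens(n, placed=()):
    """Backtracking search: place one queen per row, trying each
    column and abandoning a branch as soon as two queens attack."""
    if len(placed) == n:
        return placed  # every row has a queen: solved
    row = len(placed)
    for col in range(n):
        # a candidate square is safe if it shares no column and
        # no diagonal with any queen placed in an earlier row
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            result = solve_queens(n, placed + (col,))
            if result is not None:
                return result
    return None  # dead end: backtrack

print(solve_queens(4))  # (1, 3, 0, 2), one queen's column per row
```

The essential move is the early return of `None`, which unwinds the recursion to the most recent choice point; the design recipe’s template for generative recursion leads fairly directly to this shape.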

Judging from the discussions in the forum, a vocal minority seemed either unprepared for a programming course or unwilling to forget about “object-oriented programming”, adopt the mindset of a beginner, and just learn the basics of Lisp. There were threads with literally hundreds of replies in which “seasoned software architects” tried to argue that Lisp was the wrong choice for the course and that it should be taught in Java or Python instead. This was quite tragic to see, since Lisp, and especially the “student language” we used, can be described very concisely. I wonder what would have happened if those people had just sat down with an open mind and tried to learn the few language rules of Lisp instead of posting about how “useless” and “weird” the language was.

Lastly, a word about grading: I had the impression that there was a dramatic rate of attrition throughout the course. While the forum seemed lively during the first few weeks, activity slowed down noticeably as the course went on. Yet, I would assume that concepts like “closure” or the traversal of arbitrary-arity trees would generate more questions than, say, the problem of concatenating two strings or changing the color of a rectangle. I’d be interested to see some retention statistics. In the latter weeks, it seemed that almost exclusively people with significant previous experience were populating the forums. One guy shared personal anecdotes from meeting John McCarthy, the creator of Lisp, for instance.

There were quite a few discussions in which Prof. Kiczales contributed, too. One thread I particularly enjoyed was devoted to optimization strategies for the Sudoku solver. While I avoided the forums in the first half of the course, I saw them becoming increasingly valuable in the second half. However, if my perception is correct, then the other side of the coin is that courses like this only increase the “digital divide”. One of the promises of MOOCs was to bring higher education to people who would otherwise not have access to it, but I have the impression that it is mostly those who are already advantaged who make use of them. This is a more philosophical issue, but one that shouldn’t be ignored. I noticed that Udacity has made great strides in offering preparatory courses that try to bridge the gap between high school and university-level courses, so there is hope that MOOCs may one day indeed educate the masses and not just further the education of an intellectually curious minority. Making college-level courses more accessible by providing preparatory and remedial courses is a challenge for Coursera rather than UBC, though, so this is in no way intended as a criticism of Introduction to Systematic Program Design.

After this diversion, let me make some remarks about the grading. The homework problems are not automatically graded but evaluated through quizzes. Peer review might be a better solution. In their current form, the homework problems would be difficult to grade automatically, since most exercises admit more than one valid solution. I’m not sure a satisfying solution to this problem exists. For instance, MIT’s Introduction to Computer Science and Programming on edX split up the more elaborate assignments and had students submit their solutions in multiple parts. This was a tedious process, not least because there were plenty of instances where the autograder expected one particular approach while rejecting a perfectly valid alternative.

Speaking of projects, the first project was to write a simple text editor similar to what is found on old mobile phones:

Screenshot of the editor

This is a neat little interactive application. The program rejects invalid input, the cursor can be freely moved around to change the insertion point, and the backspace key deletes the character to the left of the cursor if there is any.

The second project was, as I said above, a solver for the 4-queens problem, based on a backtracking search. This was a very enjoyable exercise too, and one where it quickly paid off to follow the design recipe. There was a change in grading this last exercise, though. I suspect it had at least something to do with the rate of attrition, which was, or at least seemed to be, very high. The deadline for the project was pushed back by four weeks to give students copious amounts of time to finish the course material and the project. However, the final project was self-assessed, meaning that in theory you could give yourself the highest score without doing any work and thus get 15% of the course grade for free. I found this change of course policy unsatisfactory. On the other hand, most people probably enrolled in this course to learn something and not just to weasel their way to a PDF certificate.

The final exam was worth 30% and was designed as the equivalent of a 24-hour take-home exam. It covered material drawn from all eight weeks of the course. The exam was not particularly challenging, but this may just be a testament to the high quality of the teaching. Overall, Introduction to Systematic Program Design – Part 1 was a fantastic course, and I’m eagerly awaiting the second part.

Addendum:
After I wrote this review, Prof. Kiczales announced that Introduction to Systematic Program Design – Part 1 will be offered again on the 4th of September. Changes include an increase in the duration of the course from eight to ten weeks to even out the workload, the addition of a third project, and a change in the assessment of the homework problems.

No, Java is not a good first programming language

Recently someone reached this site through the search phrase, “is java a good first language 2013”, so I felt tempted to answer this question from my own highly subjective point of view. The short answer is no, it’s not. In fact, Java is one of the worst first languages you could pick. In the following, I’ll highlight two main issues. One, the complexity of Java, will make life more difficult for you immediately; the other, the danger of developing myopia regarding programming languages and their capabilities, has the potential to hurt you for many years to come, possibly your entire career.

For a beginner it can be exhilarating to make a computer bend to your will. However, the path to reaching even a low level of proficiency is paved with copious amounts of frustration. Writing code demands a level of attention to detail and a degree of logical thinking that is above what the average person could muster. Gaping holes in your logic are par for the course in, say, mainstream media or normal conversations, but a computer is not so forgiving. Just learning to proceed in a clear, logical way is difficult enough for a novice. But not only do you have to learn that. In addition, Java and other “industry-strength” languages burden you with their complexity and will therefore make it more difficult for you to learn programming.

One concept you’ll learn in CS101 is what computer scientists call “types” or “data types”. The declared type indicates how data has to be interpreted and determines how it can be manipulated. For instance, an integer with the value 42 is not the same as a string with the value “42”, even though they look the same when printed to the screen. You can do arithmetic with the integer, but not with the string; on the other hand, you can append another string to a string. All of this is probably not overly exciting. In Java, you have to declare variables with their type. In a more modern language like Python, which has been around for over two decades, you simply initialize the variable, since it is dynamically typed. Here is an example if this sounds confusing. [Note: Static typing isn’t necessarily bad. However, the type system of Java doesn’t buy you much, unlike the type inference in Standard ML and other functional languages. But if this is an objection you wanted to raise, then you aren’t a beginner and shouldn’t consider yourself part of the target audience of this article.]
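A couple of lines of Python (Python 3 syntax; the variable names are arbitrary) show the difference between the two types:

```python
n = 42       # an integer
s = "42"     # a string; prints the same, behaves differently
print(n * 2)     # 84: arithmetic works on the integer
print(s + "42")  # 4242: the string supports concatenation instead
# n + s raises a TypeError: the two types are incompatible
```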

Let’s say you’ve got a list of integers and want to double each entry of the list. If you want to use Java for that, you’ll end up writing a nice little piece of code like this:

package demo;

public class RangeDemo {

	public static void main(String[] args) {
		
		int[] myArray = { 1, 2, 3, 4, 5 };
		
		for (int i = 0; i < myArray.length; i++  ) {
			myArray[i] *= 2;
		}
		
		System.out.println(myArray);
	}
}

If you’ve never seen code before in your life, this might look quite intimidating. You’ve got all this scaffolding, and plenty of keywords which represent concepts you have not encountered yet. You know nothing about packages, or classes. The syntax is baroque. What are all those curly braces about? What are the semicolons good for?

In fact, for a beginner writing something as simple as those few lines of code can pose a significant challenge in Java. This is not because the operations that are involved are so difficult to grasp. It’s basic arithmetic, after all. Instead, the reason is that Java throws so much unnecessary complexity at you. By the way, I’ve hidden an easter egg in that code. Try to run the example for yourself, and you’ll see what I mean. Come on, try it! I’ll wait for you.

Strange, isn’t it? The console gave you an output like “[I@4b71bbc9”. This is the location of the array in memory. To actually print the contents of the array, you have to import the Arrays class and then convert the array to its string representation:

package demo;

import java.util.Arrays;

public class RangeDemo {

	public static void main(String[] args) {
		
		int[] myArray = { 1, 2, 3, 4, 5 };
		
		for (int i = 0; i < myArray.length; i++  ) {
			myArray[i] *= 2;
		}
		
		System.out.println(Arrays.toString(myArray));
	}
}

As you see, this is a lot of code for something very simple. Sadly, this approach scales: your first programs will be a few dozen lines where one or two should suffice, your larger programs several hundred lines instead of a few dozen, and in even more unfortunate cases several thousand instead of a few hundred. At this point, you’ll have heard of “patterns”, which were developed as a consequence of the deficiencies of the language and are largely superfluous in more advanced languages. [Peter Norvig discusses this problem in greater detail in Design Patterns in Dynamic Languages, but it won’t make much sense to a beginner. Maybe bookmark the link and come back after some months or a year.]

There is also the notion of “boilerplate code”, which is a problem in Java and other equally verbose programming languages. You might think that printing “hello world” should be one or two lines. Not in Java. Or you might think that a common operation like opening a file should be a line or two and consist of specifying the file name and the mode, i.e. whether the file is supposed to be only read or read and written to. In Java this isn’t quite so simple. Here is an example I’ve taken from Hackerrank:

import java.io.*;
import java.lang.*;

public class Solution
{
    public static void main( String[] args )
    {
        File fileName = new File( "myfile.txt" );
        if( !fileName.exists() )
        {
            System.out.println( "this file doesn't exist " );
            try
            {
                fileName.createNewFile();
                FileWriter fileWrite = new FileWriter( fileName );
                BufferedWriter bufferedWriter = new BufferedWriter( fileWrite );
                //bufferedWriter.write( "write something here " );
                bufferedWriter.close();
            } catch ( IOException e )
            {
                //catch exception
            }
        }
        else
        {
            //System.out.println( "this file exists " );
            try
            {
                byte[] buffer = new byte[ 100 ];
                FileInputStream inputStream  = new FileInputStream( fileName );
                int readLines = -1;
                while( ( readLines = inputStream.read( buffer ) ) != -1 )
                {
                    //System.out.println( new String( buffer ) );
                }
                inputStream.close();
            } catch ( IOException e )
            {
                //catch exception
            }
        }
    }
}

Yes, all this just to open and read a file! Also note that the use of the “BufferedWriter” in the first try block constitutes what I’ve once been told was a “pattern”. Now if you look at the try block in the else statement, you may wonder why there is no analogue like a “BufferedReader”. In this example, the file is read as an input stream. Last time I checked, the recommendation was to use “BufferedReader” since it provided better performance when reading large files, but this is nothing you have to worry about in CS101. But let’s not get lost in details. The point I’m trying to make is that code like the one above is, well, utter madness.

By the way, this is the equivalent in Python:

#!/usr/bin/python

filename = "myfile.txt"
with open( filename ) as f:
    # file read can happen here
    # print "file exists"
    print f.readlines()

with open( filename, "w") as f:
    # print "file write happening here"
    f.write("write something here ")

Again, the example has been taken from Hackerrank. The pound sign (#) indicates a comment and is not necessary for program execution. Thus, you’re looking at a total of five relevant lines of source code. Yes, five.

Those were just two random examples, but they hopefully illustrate why choosing Java as a first language is a rather bad idea. I’m proficient to various degrees in about half a dozen languages. However, I don’t think I’ve ever found myself in a situation where I was writing code in, say, Python or Erlang and said to myself, “I wish I could do this in Java instead.” When I have to work in Java, though, I tend to feel reminded of the increased expressiveness of other programming languages.

Instead of Java as a first language, I’d recommend you learn Python instead. Python code can be very clean and readable, and dynamic typing makes it more fun, too. I haven’t given you the equivalent of the very first Java example yet, which was doubling the numbers of an array of integers. The Java code at the beginning of this article can be expressed in Python like this:

my_array = [1, 2, 3, 4, 5]
my_array = [ number * 2 for number in my_array ]
print my_array

If this is too verbose, you can shorten it a bit:

print [ number * 2 for number in [1, 2, 3, 4, 5] ]

This is what people mean when they say that one line of Python can replace ten lines of Java. As a beginner, you’ll probably find this code easy to understand, even if you’ve never seen code in your life.

Apart from all the unnecessary complexity Java throws at you, there is another aspect I’d like to highlight: programming language myopia. Many people find it very difficult to liberate themselves from the mental models and habits they have acquired in their first programming course. Instead of adopting a beginner mindset again when picking up a different language and learning what it offers, they view everything from the one perspective they know, and happily write Java in Scala or Java in Python or Java in any other language that lets them get away with it. They pronounce that differences between programming languages are “just syntax” and never realize that they don’t make use of more advanced features of the languages they use.

It’s too easy to be complacent when your code seemingly works. Your overly verbose Java in Scala does the job, so why would you bother stretching yourself a little bit? This attitude is common, and you’ll have no problem bumping into people who reject using more powerful languages because of their “weird syntax”. For more on this, read what Paul Graham has to say about Blub programmers. It may come in handy at a nerdy cocktail party. Variations of what Paul Graham calls the Blub Paradox can happen at all levels of competency. People think what they are doing is good enough and don’t make an additional effort. However, it is a missed opportunity if you view a more expressive language from the view of a less expressive one and never familiarize yourself with the possibilities more powerful languages offer.

For a possibly surprising example, let me tell you about one of the MOOCs I sampled earlier this year, MIT’s Introduction to Computer Science and Programming. There are many conveniences in Python, for instance an immediate variable swap: swapping the contents of the variables a and b requires nothing more than writing “a, b = b, a”. In Java and many other programming languages, you would have to use a temporary variable and express the same in three lines. Of course, you can write the same three lines in Python and use the temporary variable you’re used to. It works, but it’s considerably less elegant. This is a relatively harmless example. More severe was that the introduction to object-oriented programming in that course was taught from a clearly Java-inspired perspective. In Java it is necessary to write so-called getter and setter methods, sometimes called accessor and mutator methods. Yet this is superfluous in Python, where you have direct access to the attributes you want to read or modify. Now, if this happened in a CS course taught by MIT, then it’s probably safe to assume that a college freshman starting out with Java isn’t immune to that kind of programming language myopia either, so do yourself a favor and learn one or two other programming languages first.
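Both points can be shown in a few lines of Python (Python 3 syntax; the Point class is a made-up illustration, not taken from the MIT course):

```python
# swapping two variables without a temporary
a, b = 1, 2
a, b = b, a
print(a, b)  # 2 1

# direct attribute access; no Java-style getters or setters needed
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(3, 4)
p.x = 10         # assign directly instead of p.set_x(10)
print(p.x, p.y)  # 10 4
```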