Metacritic and the Legitimacy of Video Game Journalism, Part I

I recently bought an Xbox 360 since I was interested in playing a couple of recent games. Some of my buying choices were influenced by reviews on websites. Games like Red Dead Redemption were great, but others didn’t quite live up to my expectations. I’ve now had a handful of experiences where the glowing reception of a game by reviewers didn’t at all match my subjective experience. Thus, I wondered how legitimate video game journalism really is.

I also noticed that some games I tremendously enjoyed, such as Vanquish, received a rather lukewarm reception. Furthermore, I couldn’t help but notice that gaming websites are plastered with ads, so the obvious assumption is that those sites don’t want to bite the hand that feeds them and therefore promote “AAA titles”, while paying little attention to games that cater to a niche audience. It’s also quite obvious that many mainstream reviewers don’t understand particular genres or, well, just plain suck at playing video games.

Here is a prime example: the Destructoid review of Vanquish by Platinum Games, written by Jim Sterling. He gave the game a 5 out of 10, and every single word he wrote indicates that he just didn’t get how to play this game. Vanquish isn’t a “cover-based shooter” in the vein of Gears of War but instead puts heavy emphasis on offensive play. The controls are a bit more complex than in, say, Call of Duty, but you’ll be amply rewarded if you spend a few minutes learning them. This was seemingly too much effort for Jim Sterling, so he wrote:

Sam [the protagonist] actually needs energy to punch his opponents, and once he’s landed a single successful punch, he can’t glide away since the energy meter completely drains. Several times, I punched an enemy, failed to kill it thanks to Sam’s inability to aim his punches properly, and was killed because I could neither defend myself or swiftly escape.

The issue is that the energy meter that enables you to perform more powerful attacks depletes as you do your little tricks. However, there is a risk/reward mechanism built in. If you completely deplete the energy meter, your combat suit overheats. You then have to allow it to cool down, which makes you vulnerable to enemy attacks since you can neither defend yourself properly nor quickly evade. The core mechanic is therefore to find a rhythm for your attacks. This is not as bad as it may sound since the energy meter replenishes very quickly. I found the game mechanic to be highly satisfying, and with some practice, it’s quite easy to get into a state of flow. Frankly, I thought that Vanquish was absolutely fantastic and that it sets a high-water mark for action games. The enthusiastic reception of this game by actual players, for instance on NeoGAF, seems to indicate that I’m not the only one who was very impressed by it.

Jim Sterling’s review of Vanquish may be an egregious example, but the average games journalist is hardly an expert. Especially when it comes to niche games, they don’t seem to know what they are looking at. For instance, one of my favorite genres is shooting games (STGs). No, not the Call of Duty kind, but the modern descendants of Space Invaders. While most games strive to be “entertainment”, and therefore offer at best a moderate challenge, STGs are designed for repeated playthroughs, with the goal being mastery so that you can eventually clear the entire game on just one credit. Depending on your skill level, this may take many months, and with the harder games you may never even get there because you’re just not good enough. This can be a humbling experience, but if you master a game like that, you’ll feel a sense of achievement which you just don’t get from games that hold your hand all the way through. Sure, it’s not for everyone, but those games have enthusiastic fans. Yet, the typical mainstream reviewer is quick to dismiss those games because you can just hit “start” again and “see everything in 15 minutes”. The thread Amusingly bad reviews on shmups.com collects statements like that. You can only shake your head.

I’ve now mentioned some examples of games or genres that tend to get short shrift from mainstream reviewers. Now let’s look at games that receive lavish praise, and whose faults either get ignored or justified. One prime example is one of the greatest commercial successes in recent years: Grand Theft Auto IV. It sold 25 million copies, and it’s the top-rated Xbox 360 game on Metacritic. I think it’s a decent game, but it has its flaws, like a repetitive mission structure, poor driving, and clunky weapon mechanics. It’s not a bad game, but hardly the masterpiece it is claimed to be. Less than a quarter of the people who bought the game finished it. The other three quarters probably got bored or frustrated.

Another example is Resident Evil 5. It’s probably not a bad game once you get into it, but it doesn’t make it easy for you to like it. My main gripe is that your character controls like a tank. One of the very first scenes has you enter a shack. Then zombies start attacking you from two and then three sides. They first come running at you, but before they reach you, they seem to hit an invisible wall that makes them stop. From this point onward, they take turns attacking you, which results in incredibly awkward gameplay. This isn’t my idea of having fun, so I have yet to return to this game. When Resident Evil 5 came out, reviewers were defending the controls as “traditional Resident Evil gameplay”, and hordes of obnoxious gaming fanboys were eager to tell anybody who dared to criticize their favorite franchise a variation of “the controls are fine, maybe you just suck at the game.”

Seeing that some of the biggest games have quite startling flaws, I ended up wondering whether “AAA games” that are backed by multi-million dollar advertising campaigns get much more praise than they deserve, not because they are so great, but because “money hatting” buys good review scores. The big games normally don’t dare to be challenging, so even your average video game reviewer can play them. On the other hand, it seems that many journalists lack the knowledge and skills to appraise niche games, and are therefore quick to dismiss them. A glance at review scores on Metacritic, which contrasts user and “expert” opinions, seemed to support this hypothesis. Looking for hard facts, I then made use of my programming skills and analyzed their data. I will share the results with you in the next post.

Backtracking Search in Python with Four Queens

The final project of Coursera’s Introduction to Systematic Program Design – Part 1 was to write a solver for the four queens problem. The goal is to place four queens on a 4 x 4 chess board so that the queens do not attack each other. This problem is a simplification of the eight queens problem, and it’s a good exercise for backtracking search. Please note that the Coursera course was using a Lisp dialect as its teaching language. Due to the Coursera honor code I cannot share that solution. However, since some students attempted to rewrite their solution in Python and discussed their approaches on the forum, I decided to port my solution to Python as well and present it here.

Given the limited problem space, there are other approaches to the four queens problem that may be more effective. Those may not teach you about constraint programming or backtracking search, though, and they probably don’t scale that well either. Just have a look at a 4 x 4 chess board: If you have the insight to put the first queen on the second square, then the problem basically solves itself!

To make it a bit easier to transition between the problem description and the eventual Python code, I’ll describe the positions on the board as integers, and not as a letter-integer pair as you’d see it on the chess board. The board therefore turns into:


 0  1  2  3
 4  5  6  7
 8  9  10 11
 12 13 14 15 

Let’s assume you want to implement a proper backtracking algorithm. Say, you start with an empty board. What would you then do? The key idea is to try every square in turn and, if a placement turns out to be infeasible further down the line, discard that branch of the emerging solution tree and continue from a point that has not yet been explored. Concretely, this means that after the very first iteration of the program, when given an empty board, you’ve created a list of 16 boards, each containing a queen on a different square. The first board in the list has the queen on position 0, the second on position 1, and so on.

Let’s stick with the very first board for the time being. In the second iteration you’d create another list of boards that all contain one queen on square 0, and the second queen on one of the squares 6, 7, 9, 11, 13, 14. Pick any of those boards, put a third queen on it, and you’ll find that it is impossible to place a fourth queen. Seeing that it’s not possible to solve the four queens problem if you start with a queen on square 0, you then move on to the next starting square, square 1, and try again. Starting there does lead to one of the solutions, so the algorithm should present a valid board at the end.

Having illustrated the problem, let’s now talk about one way to code up the solution in Python. I tend to avoid overly terse expressions, apart from a list comprehension here or there, so it should be readable with just a bit of commentary.

To make life a bit easier and the resulting code more readable, I made the following definitions:

B = False
Q = True
all_positions = range(16)

The variable “all_positions” is nothing but a veneer to make the code more legible. It represents the idea that the board is a list with 16 entries.

You might think that it’s silly to define B (for blank) as False and Q (for Queen) as True. However, you can then define a board more succinctly in list form. Here’s what the empty board looks like:

BD0 = [B, B, B, B,
       B, B, B, B,
       B, B, B, B,
       B, B, B, B,]

This list gets evaluated to a list with sixteen “False” entries, but the shorthand introduced above saves some typing and allows for easier illustration of the problem.

A queen attacks in three directions over the whole extent of the board, i.e. horizontally, vertically, and diagonally. Since the board is represented as a list of boolean values, and Python allows for index-based list access, the following definitions can be used to traverse the board easily and check whether there are any collisions:

rows = [ [  0,  1,  2,  3 ],
         [  4,  5,  6,  7 ],
         [  8,  9, 10, 11 ],
         [ 12, 13, 14, 15 ] ]

columns = [ [ 0,  4,  8, 12 ],
            [ 1,  5,  9, 13 ],
            [ 2,  6, 10, 14 ],
            [ 3,  7, 11, 15 ] ]

diagonals = [ [  1,  4 ],
              [  2,  5,  8 ],
              [  3,  6,  9, 12 ],
              [  7, 10, 13 ],
              [ 11, 14 ],
              [  2,  7 ],
              [  1,  6, 11 ],
              [  0,  5, 10, 15 ],
              [  4,  9, 14 ],
              [  8, 13] ]

Please note that this solution does not scale well. It works fine on a 4 x 4 board, but to extend it to an 8 x 8 board, you’d either have to expand the lists with fitting numerical values, or find a way to represent rows, columns, and diagonals as sequence definitions that can be generated automatically, but that’s a problem to look into later. Currently, I’m primarily concerned with an illustration of that particular search problem.
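
For illustration, here is one way such a generator could look. This is only a sketch of my own and not part of the posted solution; the function name generate_lines is made up. For n = 4 it produces the same rows, columns, and diagonals as the hand-written lists above (the diagonals come out in a different order, which doesn’t matter for the validity checks).

def generate_lines(n):
    # squares[r][c] is the index of the square in row r, column c
    squares = [ [ r * n + c for c in range(n) ] for r in range(n) ]
    rows = squares
    columns = [ [ squares[r][c] for r in range(n) ] for c in range(n) ]
    diagonals = []
    # "down-right" diagonals: r - c is constant; the range skips the two corner diagonals of length 1
    for delta in range(-(n - 2), n - 1):
        diagonals.append([ squares[r][r - delta] for r in range(n) if 0 <= r - delta < n ])
    # "down-left" diagonals: r + c is constant
    for total in range(1, 2 * n - 2):
        diagonals.append([ squares[r][total - r] for r in range(n) if 0 <= total - r < n ])
    return rows, columns, diagonals

# rows, columns, diagonals = generate_lines(4)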

With the preliminaries behind us, I’ll now focus on a general strategy for solving the problem, before filling in the blanks and showing you the source code. The following also serves as a blueprint if you want to go ahead and write the program yourself. I prefer to decompose functions so that they, ideally, achieve one task only. This leads to a greater number of functions, but it makes it easier to reason over code, to debug, and to extend the program later on.

Following the description above, the main steps are something like this:

  • 1. enter a starting position (board)
  • 2. if that board is solved, return the board
  • 3. else: generate a list of boards with all valid subsequent positions, and continue with 1)

This raises the question of when a board is valid. To make the code a bit cleaner, I’ll assume that the set of inputs only consists of well-formatted lists, according to the previous specification. A board is valid if there is at most one queen in every column, row, and diagonal, which can be expressed like this:

def board_valid(board):

    def check_entries(entries):
        for entry in entries:
            if number_of_queens(board, entry) > 1:
                return False
        return True
    
    return all([ check_entries(x) for x in [ rows, columns, diagonals ] ])

The function number_of_queens() is defined separately:

def number_of_queens(board, positions):
    return sum([ 1 for pos in positions if board[pos] ])

A board is solved if it is a valid board and if there are exactly four queens on the board:

def board_solved(board):
    return isinstance(board, list) and board_valid(board) \
            and number_of_queens(board, all_positions) == 4 
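
As a quick sanity check (my own example, not from the course material), here is one of the two solutions to the puzzle written in the board notation from above, together with what the two predicates should return:

BD_SOLVED = [ B, Q, B, B,
              B, B, B, Q,
              Q, B, B, B,
              B, B, Q, B ]

board_valid(BD_SOLVED)    # True: no row, column, or diagonal holds two queens
board_solved(BD_SOLVED)   # True: valid and exactly four queens
board_solved(BD0)         # False: the empty board is valid but holds no queens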

With those functions in place, everything is set up to solve the problem. I tend to prefer a bottom-up approach when programming, and I visually represent this by putting the most general function last.

import copy

# find all empty squares
def get_positions(board):
    return [ x for x in range(len(board)) if not board[x] ]

# place a queen on each of the given squares in turn, on a copy of the board,
# and keep only the resulting boards that are still valid
def get_next_boards(board, positions):
    result = []
    for pos in positions:
        temp = copy.deepcopy(board)
        temp[pos] = True
        result.append(temp)
    return [ board for board in result if board_valid(board) ]

def solve_board(board):
    # either the board is already solved, or we recurse on all valid successor boards
    if board_solved(board):
        return board
    else:
        return solve_board_list(get_next_boards(board, get_positions(board)))

def solve_board_list(board_list):
    # work through the candidate boards one by one; an empty list means a dead end
    if board_list == []:
        return False
    else:
        check = solve_board(board_list[0])
        if board_solved(check):
            return check
        else:
            return solve_board_list(board_list[1:])

I think this is quite readable. If the flow of execution is not quite clear, please refer to the description of the problem at the start of the article. Congratulations! You’ve now solved the four queens problem using backtracking search.
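
If you want to see a result without pulling up the full file, something along the following lines should work, assuming all of the definitions above live in one module. On my reading of the search order, the first solution found places the queens on squares 1, 7, 8, and 14.

solution = solve_board(BD0)
for row in rows:
    print(" ".join("Q" if solution[pos] else "." for pos in row))

# . Q . .
# . . . Q
# Q . . .
# . . Q .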

The entire code, with test cases and some further helper functions, like a visualizer, a generator for all valid starting positions, and a function that outputs all possible solutions, is available on my github page. Look for the file “4_queens.py”.

Review: Algorithms: Design and Analysis, Part 1 — Coursera

Udacity’s Algorithms: Crunching Social Networks is a neat course, but does focus heavily on graphs, as the title suggests. I was therefore looking for a more thorough treatment of algorithms, and Tim Roughgarden’s Coursera course Algorithms: Design and Analysis, Part 1 provided exactly that. I originally intended to write a review after finishing part 2, but there was so much content in the first part already that I dropped that idea.

Algorithms: Design and Analysis consisted of, as Prof. Roughgarden put it, “a selection of greatest hits of computer science.” It’s material any programmer or computer scientist should be familiar with. It is relevant whenever you work with algorithms and data structures, and also satisfying to study.

Here is the entire curriculum:

I. INTRODUCTION (Week 1)
II. ASYMPTOTIC ANALYSIS (Week 1)
III. DIVIDE & CONQUER ALGORITHMS (Week 1)
IV. THE MASTER METHOD (Week 2)
V. QUICKSORT – ALGORITHM (Week 2)
VI. QUICKSORT – ANALYSIS (Week 2)
VII. PROBABILITY REVIEW (Weeks 2-3)
VIII. LINEAR-TIME SELECTION (Week 3)
IX. GRAPHS AND THE CONTRACTION ALGORITHM (Week 3)
X. GRAPH SEARCH AND CONNECTIVITY (Week 4)
XI. DIJKSTRA’S SHORTEST-PATH ALGORITHM (Week 5)
XII. HEAPS (Week 5)
XIII. BALANCED BINARY SEARCH TREES (Week 5)
XIV. HASHING: THE BASICS (Week 6)
XV. UNIVERSAL HASHING (Week 6)
XVI. BLOOM FILTERS (Week 6)

I particularly enjoyed the lectures on the master method, since they cleared up a few things for me. Some time ago I was working with matrix multiplication. From linear algebra I knew that this should lead to cubic runtime. However, my impression was that the algorithm ran faster than that, so I did some research and found out about Strassen’s algorithm, which was covered in the lectures as well. What I viewed as mysterious back then was not only how Strassen came up with it in the first place (those strokes of genius are rarely explained), but also how one could make a statement as precise as saying that the algorithm runs in O(n ^ 2.8074). Well, thanks to the master method I now know.
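
To sketch where that exponent comes from: Strassen’s algorithm replaces one n x n matrix multiplication by 7 multiplications of (n/2) x (n/2) matrices plus O(n ^ 2) work for the additions, giving the recurrence T(n) = 7 T(n/2) + O(n ^ 2). In the master method’s terms, a = 7, b = 2, and d = 2; since log_b(a) = log_2(7) ≈ 2.807 is greater than d, the recurrence falls into the case T(n) = O(n ^ log_2(7)), which is the O(n ^ 2.8074) quoted above.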

All the topics listed above you can find in your typical algorithms textbook. What you don’t get when working through a textbook, however, is the fabulous presentation of the material. The lectures are full of proofs and serious discussions, but Prof. Roughgarden knows how to keep your attention with a dry remark, or by quickly traversing language levels. In one second he speaks of the raison d’être of an algorithm, and in the next he advises you “to bust out the Pythagorean Theorem” for one part of a proof. At that point I did rewind the video because I thought there was no way he could have said that.

The lectures overall were surprisingly entertaining. This was particularly the case whenever Prof. Roughgarden was discussing implications of analyses or the “big picture”. Here is a good example, taken from a brief lecture that discussed the necessity of knowing your data structures and their respective features well:

Levels of knowledge regarding data structures

Level 1 was, according to Prof. Roughgarden, “cocktail party conversation competence, but of course I am only talking of the nerdiest of cocktail parties”. The sense of humor may not be to your liking, but if it is, you’ll be in for a treat. I don’t think I ever enjoyed listening to a lecturer in a technical subject that much.

Let’s talk some more about the presentation. I have gone through courses, both online and on-campus, that presented the material as if it had been received from God in its final form, with hardly any motivation or explanation, and instead just a litany of formulae and definitions. Prof. Roughgarden eventually also ends up with lengthy equations and formal definitions, but he develops the material as he goes along, not unlike Salman Khan does in his mathematics videos at Khan Academy. Here is a slide from one of the lectures on Dijkstra’s algorithm to illustrate this:

Dijkstra in color

The many colors give a hint at the stages this drawing went through, but if this is too hectic for you, you can download the lecture slides as PDFs as well. Even typed versions were provided. Occasionally, there were optional PDFs with lengthy formal proofs for those with a greater interest in theory.

If you now said that this sounds as if the presentation was a bit whimsical, then I would agree with you. However, this does not mean that Algorithms: Design and Analysis wasn’t a thorough course. The problem sets and programming assignments required you to have a solid grasp on the material from the lectures. In particular the programming assignments required a good understanding of not only your programming language of choice but also of the algorithms and their supporting data structures. The difference between a good and a poor implementation could easily amount to several orders of magnitude. In one case, you get the answer almost instantly, and in the other your program might run for hours if not an entire day. Just imagine using repeated linear search over an array when the problem called for a hash table instead!
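
To give a flavor of this (a toy sketch of my own, not one of the actual assignments, though it is in the spirit of the 2-sum problem listed below): checking whether any two numbers in a large list sum to a given target is hopeless with repeated linear scans, but becomes trivial once the numbers sit in a hash-based set.

# slow: 'target - x in numbers' scans the whole list for every x,
# which is quadratic overall and already unusable for a million numbers
def has_pair_slow(numbers, target):
    return any((target - x) in numbers for x in numbers)

# fast: the same logic, but membership tests against a set are constant time on average
def has_pair_fast(numbers, target):
    lookup = set(numbers)
    return any((target - x) in lookup for x in numbers)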

Overall, the programming assignments were a highlight of the course. Here’s a complete list, arranged by weeks:

  • Counting inversions
  • Quicksort
  • Karger’s algorithm
  • Computing strongly connected components
  • Dijkstra’s algorithm
  • 2 sum problem & Median maintenance

The first problem piggybacks on mergesort. The others were normally as straightforward as they sound. However, the files that had to be processed were often quite large, which required some care when implementing the algorithms. The problems in weeks 3 and 4 were the most challenging and also the most interesting. How well you’ll fare with the assignments may also depend on the language you chose. The grader only checks the numerical answer; how you get there is entirely your problem. You can choose any language you want, but some may be better suited than others.

The theory questions normally related to the homework in some way. I found it therefore helpful to only tackle them after I had successfully submitted the programming assignments. For some questions it may help to consult the optional text books. Prof. Roughgarden gives references for four textbooks, one of which is available for free online. The books were Cormen et al., Introduction to Algorithms, Dasgupta et al., Algorithms, Kleinberg and Tardos, Algorithm Design, and Sedgewick and Wayne, Algorithms. I found the free resource, Dasgupta, to be sufficient for the course.

In this offering, the problem sets and programming assignments counted 30 % each toward the final grade, and the final exam made up the remaining 40 %. The final exam was fair, but not necessarily easy. However, it felt somewhat superfluous since the problem sets covered the material already, and some of the questions in the problem sets were more challenging. Partly this could be because some topics were new to me when they came up in the problem sets, so by the time I encountered them again in the final exam I was already familiar with them.

While I tremendously enjoyed the course, there were some problems, too, and most are related to the forums. Algorithms: Design and Analysis is not an introductory course. If you’ve gone through a solid CS101 course and a survey course in data structures, maybe like Stanford’s free CS106A and CS106B, you should be well-prepared. If you lack this knowledge, then do yourself the favor and work on the basics first. However, quite a few people seemed to not have read the course description, according to which this course is appropriate for junior or senior level CS students. As a consequence, I mostly avoided the forums in the beginning because they were filled to the brim with basic or superfluous questions, often posed with poor grammar and vocabulary. I couldn’t help but wonder why it was almost always people using Java who didn’t have a proper grasp of programming, the course content, and the English language. As the course progressed, fewer and fewer of those people were left, which substantially increased the value the forums provided.

Weak forum moderation is by far my biggest gripe with MOOCs. I think a sub-forum for beginners would have been a good idea. Forum moderators could then simply move all distracting threads there. An even better idea would have been some kind of entrance exam that checked basic competency. There certainly were some good discussions in the forum. Yet, the signal-to-noise ratio was at times as bad as on any “social media” site. In other words: the few good contributions were drowned out by a myriad of barely legible or superfluous “me too” posts from people too lazy to check whether a similar issue had been discussed already.

Speaking of the course material, one negative aspect was that no test cases for the programming assignments were provided, and the text files we were given were normally so large that a manual check was not feasible. This was where the forum shined, as some people posted small test cases to work with. This was a good alternative to testing your algorithm on a 70 MB text file. I’m sure many would have appreciated it if the course staff had provided a number of test cases themselves, as this would have ensured a smoother experience.

Further, some people managed to implement algorithms that could process the small test cases but choked on the larger files. When I took an algorithms course at university, we were given two or three input files per assignment. To pass the assignment it was sufficient if your algorithm could process the smallest input size. This was a fair approach since you showed that you could implement the algorithm. In Algorithms: Design and Analysis, on the other hand, your only choice was to correctly process the often very large input file. Therefore, I think that people were unfairly punished for implementing an inefficient but principally correct algorithm. They were no better off than somebody who didn’t manage to implement the algorithm at all. But shouldn’t the former group at least have had a chance to earn partial credit?

While there were some minor issues with this course, I nonetheless think that Algorithms: Design and Analysis, Part 1 was great. Especially for an autodidact it’s probably a better solution to go through this course than any of the standard textbooks on your own. I highly recommend it, and I’m eagerly waiting for part 2.

Think Python Solutions on Github

I’m quite surprised how much interest there has been in my solutions of the Coding Bat exercises. Therefore, I’ve decided to publish my solutions of the end-of-chapter exercises from Allen Downey’s Think Python: How to Think Like a Computer Scientist as well. You can find them on my github page.

My solutions are almost complete. I only skimmed the chapters on turtle graphics and the GUI toolkit (Tkinter), as well as most of the chapters on object-oriented programming.

Allen Downey himself provides solutions to many of the exercises. On my github page you’ll find solutions to all the exercises he doesn’t cover. The rest aren’t duplicates, though: I have had a look at a few of Allen Downey’s solutions, and some were a bit different from mine. So, it may be worth checking out both if you decide to go through Think Python. You’ll probably learn something from both Allen Downey’s solutions and mine.

Allen Downey’s Think Python: How to Think Like a Computer Scientist

Many textbooks are bloated, poorly structured, and badly written. Most seem to be quite useless without an accompanying college course, but if the course is well-taught, you can often skip the textbook altogether. Programming is no exception to this rule. As Python got more popular, the publishing industry started churning out one tome after another, and from what I’ve seen, they are often dreadful.

For a particularly bad example, look at Mark Lutz’s Learning Python, now in its 5th edition. It’s a staggering 1600 pages thick, and full of downright absurd examples that seem to consist of little more than manipulation of strings like “spam” and “egg”, and if you’re lucky, he will throw in the integer “42” as well. Mark Lutz’s book on Python is quite possibly the worst technical book I have ever encountered, but the other books I’ve sampled were not much better.

One thing they all seem to have in common is their inflated page count. I think this is simply a tactic of publishers to justify selling their books at a higher price. Adding another 500 pages costs very little with a decent print run. Yet, all that dead weight allows them to increase the retail price by 100 %. Apparently consumers have been misled into believing that a higher page count means a better bang for your buck, but the opposite is true.

On the other hand, Think Python: How to Think Like a Computer Scientist, in version 2.0.10, hardly exceeds 200 pages. Yet, Allen Downey manages to cover all basic programming constructs, recursion, data structures, and much more. He even offers helpful debugging tips and adds a useful glossary for quick lookup of terms you may be unfamiliar with. File I/O gets its own chapter. That’s not all. Towards the end of the book, he invites you to explore a GUI toolkit (Tkinter), object-oriented programming, and the basics of the analysis of algorithms. The amount of content in this book is quite staggering, especially when compared to its peers. Downey has organized his material very well, which results in a book that is slim, yet still feels complete.

What I particularly liked about Think Python is that the material is presented in a clear, logical order. Consequently object-orientation shows up very late. In fact, it is introduced as an optional feature. This is how it should be done, if you want to include OOP. For a beginner, “modern” OOP adds unnecessary complexity and makes it harder to form a clear mental model of the problem you are going to solve. On the other hand, competent experienced programmers tend to be highly critical of OOP. You’re probably better off if you never encounter it. That’s not (yet?) a mainstream opinion, though, so you may have to learn OOP eventually.

In a book like Liang’s Introduction to Programming Using Python, you get dozens of exercises at the end of each chapter. Most are a drudgery, focusing on minute details of the language. Often they are mere variations of one another, not unlike the mechanical “drills” that are popular in high school mathematics education. But this isn’t even the worst part of it. Nowadays, you can’t just pick up an old edition of a textbook and get basically the same product plus a few typos here and there. No, instead there are changes all over the place, not all of them particularly well thought out, and the exercises get modified as well. Of course, compulsive updating and rushing a book to release make it easy for new errors to find their way into the text, as evinced by the long errata lists of every new edition. On a side note, Liang’s books are particularly aggravating since you don’t even get the full book anymore. Instead, a good chunk of the content consists of “bonus chapters” that have to be downloaded using a code printed in the book, presumably as an attempt to make buying used books less attractive.

Compared to that despicable strategy, you can get Downey’s book not only free of charge, but complete too. His exercises are often surprisingly engaging for a beginner’s text. Writing a simple recursive function that checks whether two words are anagrams of each other is not very exciting. On the other hand, processing a text file that contains over 100,000 words and finding the five longest anagrams in the English language is more involved, and to successfully solve that exercise, you have to draw on previous chapters. This makes Think Python particularly interesting for autodidacts, since you can effectively use the exercises to check whether you have gained a firm grasp of the material. There is a reasonable number of exercises in the book, and they are well-chosen. They often build systematically upon each other. This should be normal for textbooks, but it’s the exception.
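
For what it’s worth, the core of such a solution can be sketched in a few lines. This is my own sketch, not Downey’s code; the file name words.txt and the exact notion of “longest” are assumptions. The idea is to group words by their sorted letters, since every group with more than one member is a set of anagrams.

from collections import defaultdict

def anagram_groups(path="words.txt"):
    groups = defaultdict(list)
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            groups["".join(sorted(word))].append(word)
    # keep only keys that map to more than one word
    return [ words for words in groups.values() if len(words) > 1 ]

# e.g. the five groups containing the longest words:
# sorted(anagram_groups(), key=lambda ws: len(ws[0]), reverse=True)[:5]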

Think Python: How to Think Like a Computer Scientist can be downloaded freely from the author’s homepage. The book has been released under a free license, and as a consequence, there are editions for different programming languages. Think Python itself is based on an earlier book that covered Java. Being freely available has also given the book a large audience: Downey acknowledges dozens of people who have made suggestions or pointed out errors in the text. After about a decade, this has made Think Python a very polished book. Of all the introductory Python textbooks I’ve had a look at, it is the only one I feel comfortable recommending. It’s great for complete beginners and also for people who have experience in another language and quickly want to familiarize themselves with Python.