

NFLPool Prototyping with MongoDB

Yesterday was a good day.

With the static pages for nflpool.xyz complete, I started thinking about the dynamic pages. These are going to require access to the database and I’ll be using MongoDB. I had started the MongoDB course from Talk Python, but put that aside to go back to the Python for Entrepreneurs course to get the site up using Pyramid.

I took a step back and did some brainstorming about the data model I’ll need for the database. I grabbed a spare whiteboard and started scribbling.

[Image: nflpool whiteboard brainstorming]

Using MongoDB requires you to shift your mental model away from traditional SQL and joined tables, since MongoDB doesn’t technically do joins. I needed to think about what a collection looks like: do I embed more documents within a collection, or have multiple collections?

I went back through the MongoDB course chapter on Modeling and Document Design. Mr. Kennedy has you ask six questions when deciding whether or not to embed:

  1. Is the embedded data wanted 80% of the time?
  2. How often do you want the embedded data without the containing document?
  3. Is the embedded data a bounded set?
  4. Is that bound small?
  5. How varied are your queries?
  6. Is this an integration DB or an application DB?

The answers I came up with (which are hopefully correct):

  1. Yes
  2. Almost always
  3. I’m such a newbie I don’t even know what a bounded set is.
  4. I think so.
  5. Not very.
  6. This is an application database.

Keeping in mind that MongoDB has a 16 MB document size limit (which sounds much smaller than it really is, since I’m only dealing with text and not embedding images or anything like that), the answer is to embed everything.

The next step was to go back through the MySportsFeeds APIs and figure out which ones I’ll be using. In no particular order:

  • Cumulative Player Stats (for individual player picks, such as the passing yards leader)
  • Roster Players (used for the league players to pick who the individual player leaders are)
  • Playoff Team Standings (Used for wildcard playoff picks)
  • Division Team Standings (Used for which teams will finish 1st, 2nd and last in each division)
  • Conference Team Standings (Used for the team that will lead its conference in points for as well as some team data, such as the team name, abbreviation, etc.)

I may want the full game schedule at some point, but that’s a bigger challenge than I need to get into right now.

I ended up deciding that I need two collections within my database:

  • Users: this will store the registration information for each player in the league and be used for logging into the website
  • Seasons: Here I will embed all of the data for each season of play and have a document for 2016, 2017, etc. Within each year I’ll embed the league players’ picks, plus a document for each of the APIs above with 17 embeds – one for each week of the NFL season. One of my goals is that a player can go back to 2016, look at their progress for each week, and see their point total versus the rest of the league. (Getting ahead of myself: this will not be available in MLBPool2, if and when I ever build that, as it will only be a real-time look at your score plus the final year results.) A rough code sketch of this structure follows the outline below.

So it may look something like this, with two collections: Users and Seasons:

nflpooldatabase

–Users (embed a document for each player in the league in this collection)

–Seasons (Example: “2016” – and then embed the following in the 2016 document:)

  • 2016
    • player-picks
      • 1 document for each player’s picks
    • Week 1 through 17 (17 documents total) with the NFL stats for that week embedded here:
      • AFC East
      • 11 more documents for division picks like AFC East above
      • 6 documents for individual leaders
      • Tiebreaker
      • Documents for Points For, Wildcard, etc.
  • 2017
    • player-picks
    • Weeks 1-17 (One document per week)
      • NFL stats with all the embedded documents above
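
To make that concrete for myself, here’s a rough Python sketch of what a single Seasons document might look like. Every field name and ID below is a placeholder I invented for this sketch; only the nesting mirrors the outline above.

season_2016 = {
    "season": 2016,
    "player_picks": [
        # one embedded document per league player
        {"name": "Some Player",
         "picks": {"afc_east_first": 54, "passing_yards_leader": 6923}},
    ],
    "weeks": [
        {
            "week": 1,
            # NFL stats for this week, one sub-document per category
            "division_standings": {"AFC East": [54, 50, 47, 48]},
            "individual_leaders": {"passing_yards": [6923, 5915, 5858]},
            "points_for": {"AFC": 54, "NFC": 62},
            "wildcard": {"AFC": [50, 57], "NFC": [62, 66]},
        },
        # ...17 of these, one per week of the season
    ],
}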

Now it was time to rebuild the functions that fetch the data from the MySportsFeeds API. I had this working in the last iteration of the app when I was using SQLite, but I’ve never used MongoDB before. Over my lunch hour, I successfully prototyped taking one query and putting it into my MongoDB instance running locally.

The feeling of euphoria in successfully using MongoDB was huge.

Last night after work, I took the next step. Keeping in mind the size limitation of MongoDB, I could take steps to filter the API calls, especially for cumulative stats. I only need a few key stats for a subset of all players in the NFL. For example, I just need passing yards for quarterbacks. The MySportsFeeds API provides a ton of stats for every player – fumbles, passes over 20 yards, QB Rating, completions, and even defensive stats for offensive players.

Thankfully, Brad Barkhouse of MySportsFeeds is always available in the MSF Slack channel. I couldn’t figure out how to build a filter for just certain positions and a specific stat. (It turns out it’s just an & sign). So if I just want sacks for defensive players, it looks like this:

https://api.mysportsfeeds.com/v1.1/pull/nfl/2016-2017-regular/cumulative_player_stats.json?position=LB,SS,DT,DE,CB,FS,SS&playerstats=Sacks
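
If that filtered call works the way I think it does, getting the result into MongoDB running locally should only take a few lines with requests and pymongo. A rough sketch, using the database name from my whiteboard session; the season filter and field names are just my guesses:

import requests
from requests.auth import HTTPBasicAuth
from pymongo import MongoClient

import secret  # my module holding the MySportsFeeds credentials

# the & joins the position filter and the stats filter
url = ('https://api.mysportsfeeds.com/v1.1/pull/nfl/2016-2017-regular/'
       'cumulative_player_stats.json?position=LB,SS,DT,DE,CB,FS,SS&playerstats=Sacks')
response = requests.get(url, auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))

db = MongoClient('localhost', 27017).nflpooldatabase
# stash the sacks stats inside the 2016 season document
db.seasons.update_one(
    {'season': 2016},
    {'$set': {'sacks': response.json()}},
    upsert=True)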

So my task for the weekend is to figure out whether I want to embed a document for each individual category, or one document with all of the cumulative stats and then build queries for each category I care about (sacks, interceptions, passing yards, etc.).

I probably should just focus on the next phase of the Python for Entrepreneurs training though, and get the user login and authentication built, then go through the Albums part of the training, which I’ll mimic for the league players to submit their picks. I’m running out of time, as I only have a few weeks before the player picks need to be submitted ahead of the start of the season.

Prioritizing is fun. But I’m so happy with some of the breakthroughs and progress that I’m trying not to think of all the challenges ahead and just take it one step at a time.

Python for Entrepreneurs Progress

I sat down at dinner last night excited to share with my wife the two things I learned in my Talk Python course yesterday. The first was learning the basics of CSS, something I’ve avoided for years. I’m not going to pretend I understand CSS, but it’s base knowledge to build on, and there is still a whole chapter on applied front-end frameworks, so I’m sure there will be more on CSS.

The second was a cache-busting technique, which makes it easy to develop a website and see the changes right away without having to clear the browser cache, and is great for users too: they’ll see updates in production as soon as they happen.
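
I won’t reproduce the course’s code here, but the core idea, as I understand it, is to make static file URLs change whenever the file’s contents change, so browsers can cache aggressively yet still pick up new versions immediately. A toy sketch of that idea (the paths and example hash are made up):

import hashlib
import os

def busted_url(static_path):
    # Hash the file's contents; the query string changes only when the
    # file does, so browsers re-download new versions but cache the rest.
    with open(static_path, 'rb') as f:
        digest = hashlib.md5(f.read()).hexdigest()[:8]
    return '/static/{}?v={}'.format(os.path.basename(static_path), digest)

# e.g. busted_url('static/site.css') -> '/static/site.css?v=3f2a9c1b'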

As I follow along in the Python for Entrepreneurs class, I’m trying something different this time. Rather than code along with the examples as Mr. Kennedy does them, I’m trying to build nflpool.xyz using code similar to what is in the training. There have been a couple of gotchas doing it this way, as you’ll start a chapter doing something one way and then learn a different and better way to do it. Overall, I like doing it this way, as hands-on application is one of the ways I learn best.

Unfortunately, the cache-busting code broke Pyramid and I got a myriad of errors in my Chameleon templates. I didn’t realize this until after I had started the routing section of the training, and then lost an hour or two trying to troubleshoot the cache-busting. I finally gave up and ripped the cache-busting code out of the templates, and everything is working again. Well, working without the cache-busting code.

As I worked on the routing section, I’m not going to say I truly understand it yet, but it started to click for me why using a Python framework like Pyramid makes sense. When I’ve mentioned to a couple of people that I’m going to use Python and Pyramid to build a site, I’m usually asked why I wouldn’t just use JavaScript like everyone else these days. For me, focusing on one language at a time and not trying to learn too much is key. I’m pretty sure that I could do everything I want to do with nflpool in JavaScript (including both the game calculations and the website), but Python’s readability and reputation for being a good language to start with really appeal to me. So why wouldn’t I build it in Python? It gives me more hands-on experience with the language, which I need, and I can include the code needed for the scoring right in the application, so I don’t have to build two things – the game scoring and a website.

I still have a lot to get through this week. I need to finish the applied web development chapter, including forms (yay! Maybe I’ll be able to take user picks by next month. Now, don’t get ahead of myself…); then front-end frameworks with CSS and Bootstrap; the biggest of them all – databases (more on that in a second); as well as account management; and finally, deployment. I skimmed the deployment chapter Sunday night – lots of good stuff (even if they use an Ubuntu VPS in the training) and I’m excited to give Ansible a chance.

One of the nice things about the trainings from Talk Python is that Michael Kennedy offers office hours for Q&A if there is something you’re stuck on. The next one is tomorrow, which is perfect. I’m going to see if I can solve my cache-busting problem, and if not, maybe ask for help. After taking his MongoDB training last week, I also want to ask his thoughts on using MongoDB instead of the SQLite that Python for Entrepreneurs uses.

I’m really enjoying the class and glad I’m making time for it. I don’t know if I’m going to have a skeleton up by the end of the weekend or not, but it’s coming along.

Stay on Target (or why my Python app still isn’t built)

One of the things I’m not doing well is focusing on one task at a time. As I continue to learn Python, every time I come across a way to do something, I want to implement it right away without thinking ahead about how all the different pieces work together. Then I’ll get stuck, and frustrated, and my pace slows.

I need to find a task, stay on target, and just finish it, rather than jumping from feature to feature. With this in my mind, I’ve taken a step back to think about what needs to get done.

There are two major things that need to be built:

nflpool.xyz

Using Pyramid, I need to get the website up – even if this is just a skeleton. The Python for Entrepreneurs course will get me there. I need to follow through and finish it.

Major features for the website include:

  1. User creation / management: This includes creating an account, resetting passwords and login / logout. The course does an awesome job of showing how to properly hash and salt passwords – I just watched and worked on this chapter yesterday.
  2. Yearly Player Picks: I will need to create a form for each player, after they have created an account, to submit their picks. This form will need to talk to the database to display the list of teams in each conference and the players available at each of the positions. I briefly looked at the Pyramid documentation Friday night and something like WTForms might work for this, but I really know nothing about it at this point. From there the player will need to hit submit, review their picks or make changes, and then submit their picks, which are stored in the database.
  3. Scoring: The last section of the website is the most important part to each player – how are their picks doing against everyone else? One of the reasons I’m using a database is that the cumulative player stats MySportsFeeds provides are just that – cumulative through the season. There isn’t a way to just get the quarterback stats for week 5 of the 2016 season, so I need to store each week’s stats in the database. This way a player can track their progress in nflpool through the year. Want to see where they stand right now? Check. Versus two weeks ago? Check. So the website will need to default to the latest week and then let the player choose the year and the week to see past history. (A rough query sketch follows this list.)
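
Here’s the kind of query I imagine that page needing if I go the MongoDB route, sketched with pymongo; the collection layout and every field name are just my current guesses, not a final schema:

from pymongo import MongoClient

db = MongoClient('localhost', 27017).nflpooldatabase

def stats_for_week(season, week):
    # pull just one week's embedded stats out of a season document
    doc = db.seasons.find_one(
        {'season': season},
        {'weeks': {'$elemMatch': {'week': week}}})
    return doc['weeks'][0] if doc and doc.get('weeks') else None

latest = stats_for_week(2016, 17)   # the site would default to the newest week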

The only open question for the website right now is whether to use SQLite or MongoDB. I’d prefer MongoDB, as I’m stuck on how to create the individual player picks table and wanted to try it as a key / value store in a MongoDB collection. The course is focused on SQLite with SQLAlchemy – something I’d like to learn – but I think MongoDB might also be easier for taking the JSON from MySportsFeeds and sticking it right into the database.

nflpool app

The app has two major features that need to be completed:

  1. Import data via JSON from MySportsFeeds into the database: I had all of this done using SQLite. If I choose to switch databases, I’ll need to rewrite this code.
  2. Scoring calculations: This isn’t done at all. This depends on the player picks table in the database, which is where I was stuck a few months ago when I took a break. I can’t figure out the data model for it no matter how many times my wife tries to explain it to me, and I don’t know if I’m just being optimistic in thinking a key / value store in MongoDB would work better. I’m going to give this a bit more thought and actually write out what the document would look like – probably an embedded document in each user’s account, as sketched below.
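
Writing it out as promised, a picks document embedded in a user’s account might look something like this. Every field name and ID here is hypothetical; it’s just me thinking out loud in code:

user = {
    "email": "player@example.com",
    "name": "Some Player",
    "picks": {
        "2017": {
            # team_ids for 1st, 2nd and last place in a division
            "afc_east": {"first": 54, "second": 47, "last": 61},
            # player_ids for the individual stat categories
            "passing_yards": 6923,
            "sacks": 7115,
            "tiebreaker": 38,
        }
    },
}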

Next Steps

With all that said, I think working on the website and getting a skeleton up is the best next step. If I can get the site up, then start to work on the submission form for the picks (which will require importing a bit of team data into a database, so I’ll have to make a decision there), I think I’ll feel a lot better. In a perfect world – and I know this isn’t going to happen in the next 30 to 45 days, when I really need it – I’d have the submission form working prior to the 2017 season starting. Even if I still have to calculate the points weekly like I’ve been doing for the last two years, at least I’d have the picks in the database this time instead of having to work with the challenge of Google Sheets.

MongoDB for Python Developers

I’m taking the latest training course from Michael Kennedy at Talk Python, which launched just a couple of weeks ago: MongoDB for Python Developers. This is my first exposure to NoSQL. Over the last year, I’ve searched Google a few different times trying to understand what NoSQL is, without any success – it always went over my head. Within ten minutes of starting this course, I think I might understand what a document database is.

I took a break from coding the nflpool app a few months ago after my wife gave me some feedback on how I was designing the data model for the SQLite database I was using. I was pretty frustrated, not with her, but with my lack of knowledge. I still hadn’t figured out how to import the individual player picks from the Google Sheet I was using, though I did find some open source code that did it perfectly. The challenge was that if I changed the data model, the import functionality was going to change significantly, and I couldn’t figure it out.

Here it is, early July, and I feel the panic of not having the app built for the upcoming NFL season for the second year in a row. That’s ok – it’s a marathon, not a sprint, to learn Python and build the app. Everyone needs a hobby.

I’m going to create a new branch in nflpool and see if I can use MongoDB instead of SQLite. I need to sit down this weekend, give it some thought, and sketch out the data model for the Users collection, but I think it could (should?) work better than what I was planning. It’s already obvious that importing the NFL statistics from MySportsFeeds via JSON directly into MongoDB should be a slam dunk.

The challenge in switching is twofold. First, I’ll need to understand how the switch changes what I’m doing in the Python for Entrepreneurs course: as I’m going to use Pyramid for the web framework, I need to know how Pyramid and MongoDB will work together. This is especially true for the user accounts and database sections of the course.

The second risk is that by switching to MongoDB from SQL, there will be no help available from my wife. I might drive her crazy with my questions and the way I ask them, but she has a lot of knowledge of SQL, and it might be even more of a challenge doing the database on my own, in addition to the Python.

I’m enjoying the MongoDB for Python Developers course. To be fair, it’s definitely over my head – I’m not a real developer, nor do I have any kind of database experience or know any JavaScript – so I’m taking it slow and in chunks. I’m not coding the examples as I follow along yet; I’m going to audit the whole course, give some thought to confirm this is what I want to do, and then go through it again. It’s probably in my best interest to finish Python for Entrepreneurs and get the Pyramid web app up and running. I do enjoy Mr. Kennedy’s courses – the way they are structured, how each lecture builds on the others, and his delivery make them worth the money.  Even for some of the topics where I don’t have the prerequisite knowledge I probably should, I find myself learning.  I’m on vacation next week and plan to spend a good chunk of time going through both the Python for Entrepreneurs course and the new MongoDB course.

Why I enjoy writing user help for GNOME

It’s been almost ten years since I started contributing to open source projects.  One of the big ways I’ve contributed in the past is writing user help.  Not knowing how to code then (and I still really don’t now, as hard as I try to learn Python), writing is something I enjoy and an area where I think I can make a difference.

There are a number of different places to apply a writing skill in open source.  You can write release notes, marketing copy, websites and the help documentation for an application.  Writing user help is the one I enjoy.

I want to say that writing help is easy – but what’s easy for me may not be for others.  Those who write an application might say coding is easy for them – but as I’m learning, it’s not easy for me.

You can debate who might want to use a Linux desktop and not use Microsoft Windows or macOS.  To me, there are a few different use cases:

  • Developers.
  • Hobbyists.
  • Users in developing nations.

It’s for these last two groups that I think having up-to-date user help is important. Using a Linux desktop, such as GNOME, can be a big change and paradigm shift for a user.  Users in developing nations may not be able to afford a Windows license or the applications they might want to run on Windows.  For example, Photoshop isn’t cheap – but GIMP is free.  If you’re switching to a new operating system, there may be things you don’t know how to do, and if there isn’t user help available, how else are you going to learn?  Especially if you’re in an area that might not have good internet access.

But having started to learn to code in the last year, I understand why developers don’t write help.  Even with my terrible skills at writing code, when I’m writing a function in Python, I’m not documenting my code as I should be, much less writing a document about how to use the finished application (if it ever gets finished).  You get in the zone and just write code and tell yourself you’ll get to it later.

But on the other hand, when I start an application and can’t figure out how to do something, my first step is to see if I can figure it out myself.  I’ll check to see if there is help built into the application; if not, I’ll check the website.  Having come back to using GNOME a month ago, I was dismayed to find that an application I was excited to use had neither when I was trying to figure out how to use a feature.  (I won’t shame them publicly, and no, it isn’t an application created by GNOME; it’s an actively developed application available on Github.)

Although I’ve been using macOS for the last few years, it’s not as if I stopped using open source.  I have open source applications running on my Macbook, a laptop running Fedora though I didn’t use it much, a server at home also running Fedora, and Digital Ocean droplets running CentOS and Fedora.  I strongly believe in open source and love that it’s powered both by people and companies building software in the open for anyone to use or modify.  And that last sentence is important – if anyone can modify it or make it better, why wouldn’t I help if I have the time and / or the skills?

So I did.  Jumping back in with both feet.  After my wife gave me some feedback on the app I’m building that I needed to re-architect a large part of it, I took a break from it for the last week and wrote user help for two apps in GNOME: Polari, an IRC client, and Recipes, a brand new application that does exactly what you think it does.  I’m even poking around the documentation for Builder, an IDE for building GNOME apps, and editing its developer documentation.  (I won’t even pretend I know how to write developer documentation).  It’s a nice change of pace to use a different part of my brain to write user help while my subconscious figures out how I’m going to fix the data model in my app.

Having been away from the GNOME community for a number of years, I’ve always said the one thing I missed about open source was the people – and it’s been neat to be welcomed back and see some of the same faces.  I love the collaboration.  Maybe someday after I finish my Python webapp I’ll learn GTK and make myself a desktop app out of it.  But let’s not get ahead of myself.

NFLPool 0.1 milestone completed

I followed through on my last blog post and made a lot of progress over the weekend – the best way to learn is by doing.  I’ve updated my roadmap for nflpool and broken the development of the nflpool app into chunks:

  • 0.1: Database creation complete – write the Python code and SQL statements to create all the needed database tables using sqlite3.  This includes using the requests module to import all players in the NFL into the database from MySportsFeeds.
  • 0.2: Import the 2016 statistics from MySportsFeeds into the database. This includes everything needed to calculate an NFLPool player’s score: individual player statistics, division standings, Wild Card seeds, etc.
  • 0.3: Scoring calculations are complete – the app works. The nflpool app can take every player’s picks, compare it to the final standings, and output everyone’s score for this past 2016 season.
  • 0.4: If 0.3 can calculate the final 2016 standings, 0.4 will add functionality to step through every week of 2016 individually, from weeks 1 through 17. This will have to be different code, as it won’t use the requests module to get real-time data; it will use the JSON data I downloaded weekly last year. This will help me prepare for the 2017 season by proving that the app can calculate the score each week until the season ends.
  • 0.5: The nflpool app now lives on its website, nflpool.xyz. This will include an online form for the 2017 season where players can make their picks and these picks are inserted into the database. This will be built on Pyramid (after I complete the Python for Entrepreneurs course from Talk Python to do this.)
  • 1.0: Full nflpool.xyz integration. Players can browse by week for the current season and past seasons.

After this weekend, the 0.1 milestone is complete. I ran into a few challenges, but the database is complete and I even have cumulative NFL player stats imported as part of the 0.2 milestone. The first challenge was that I could not get the CSV file imported into the sqlite3 database. We originally used a Google Form to capture each player’s picks, which I saved from Google Docs as a CSV file to be imported. I kept getting a “too many values to unpack” error: the SQL statement was expecting 47 columns, and no matter how many times I compared the CSV columns to the statement and re-checked, I couldn’t find my mistake. After doing some Google searches, I came across this Python script on Github to import a CSV into sqlite – and it worked!
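
For anyone who hits the same thing, the heart of a CSV-to-sqlite3 import is only a few lines; the part that bit me was keeping the number of placeholders in sync with the number of columns. A minimal sketch, with the file and table names made up:

import csv
import sqlite3

conn = sqlite3.connect('nflpool.db')
with open('player_picks.csv', newline='') as f:
    rows = list(csv.reader(f))
header, data = rows[0], rows[1:]

# one "?" per column; a mismatch between the CSV and the table
# is where my import kept blowing up
placeholders = ','.join('?' * len(header))
conn.executemany(
    'INSERT INTO player_picks VALUES ({})'.format(placeholders),
    data)
conn.commit()
conn.close()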

The second challenge I ran into today. I realized after importing the players’ picks and the NFL player statistics that the CSV file identified NFL players by name, while the database uses the player_id, an integer, from MySportsFeeds. Using the player_id is the correct way to do this, but I needed to modify the CSV and re-import. No problem, but after doing this, I realized I would need to do the same thing again for the team picks – I need to use the team_id, not the team name.

This is all now done and I can move on to the 0.2 milestone. Starting with the five picks for individual stats (passing yards, rushing yards, receiving yards, sacks and interceptions – all already imported using requests!), I’ll write a function that checks whether the NFL player a league member picked finished in the top three of that category and assigns the correct points. I’ll then add an if statement to see if the nflpool player made a unique pick in that category, and if so, double the points earned.
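
Sketching that logic out before I write the real thing (the point values here are placeholders, not the league’s actual scoring):

def score_category(picks, top_three, base_points=(5, 3, 1)):
    # picks: {nflpool_player_name: player_id they picked in this category}
    # top_three: player_ids that finished 1st, 2nd, 3rd in the category
    scores = {}
    for pool_player, player_id in picks.items():
        points = 0
        if player_id in top_three:
            points = base_points[top_three.index(player_id)]
            # a unique pick (nobody else chose this NFL player) earns double
            if list(picks.values()).count(player_id) == 1:
                points *= 2
        scores[pool_player] = points
    return scores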

From there I’ll move on to all the other categories such as Division Standings or Points For and use the same logic.

This is huge progress. The point calculations will be the hardest part of the app (outside of building the website) and now it’s time to see how much Python I’ve learned.

Writing Python to Learn

I’ve spent a lot of time on my Python journey watching videos, reading a lot of articles, reading Reddit and listening to podcasts trying to learn from osmosis. But everyone says the best way to learn is to have something you want to build and get to writing it.

I took a week of vacation in mid-February with a goal of buckling down and writing some code. That didn’t happen. I spent half a day getting my environment set up in Fedora; a half day researching PostgreSQL vs. MySQL and then getting MySQL set up on my development machine and on my server; a day of actual vacation (yay!); a day taking the latest Talk Python course (helpful – and cool!); and then a day spent trying to figure out how to get MySQL working – which I never did.

Looking back, I wish I had captured what worked well or wasn’t working in my journal at a minimum, so I could turn that into blog posts, or just blogged. When I started this journey to learn Python and build my two apps, I had every intention of doing exactly that. Everyone who has a blog intends to write in it – and how many actually do?

I find that when I sit down to code, one of two things happens. If things are going well, I lose track of time, and the next thing I know I have to run the kids to hockey or basketball, or it’s time for me to go to bed, and I don’t recap what I’ve done. The other is I throw up my hands in frustration because it’s not working and I walk away – also not capturing where I’m stuck or why I’m frustrated.

So here we are again, and I’m going to try harder to chronicle my journey. I had a good night last night just sitting down and reviewing the nflpool code I had started. I’ve gone back to using SQLite, as the SQL I had written to create the database tables works – making it work with MySQL wasn’t happening, and I was sick of losing time and using it as an excuse.  Considering that there are fewer than 20 people in each of the two leagues, I’m not worried about performance right now.  The SQLite code works and I need to make some progress.

Three things I accomplished last night:

  • I created two additional branches in Git. I have a scratchpad branch – this is all my original code from six months ago. It’s terrible. I wasn’t writing functions, it’s not well organized, etc. This was my playground to experiment in trying to put the pieces together. I don’t want to lose these files, so I’ll store them in their own branch, but they won’t be used again. I created a develop branch – this is where I’m doing all my active development. When things are working as they should be, I’ll do a pull request and merge them into master. I don’t know if this is the “right” workflow, but it will work for me.
  • I had three or four different Python scripts to create the tables in SQLite. I created one Python file to create all of the tables I’ll need, with a function for each table. I tweaked some of the columns in a few of the tables after reviewing my data model, realizing that some tables didn’t capture the year or season. I added a main method to call all of these functions. I then deleted the Python scripts that did this individually and merged these changes into master. (A trimmed-down sketch of this layout follows the list.)
  • Lastly, and maybe most important, when I was done for the night, I grabbed my notebook and made a to-do list of what to work on next. For example: one of the tables imports some information needed for the NFL teams (their city, abbreviation, etc.). This data never changes, but I was importing it from JSON data I downloaded from MySportsFeeds. This needs to be re-written to make a request to the MySportsFeeds API to get the data rather than loading a file into memory. (Just in case anyone ever wants to re-use this code to run the same pool – I don’t ever see that happening, but it’s best to do it right the first time.) This way I know where to pick up when I start again, and it should reduce the time spent reviewing the code to figure out what to work on next.
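
The shape of that consolidated file is nothing fancy. A trimmed-down sketch, with the real column lists omitted and placeholder names throughout:

import sqlite3

def create_players_table(conn):
    conn.execute('''CREATE TABLE IF NOT EXISTS players
                    (player_id INTEGER PRIMARY KEY, name TEXT, position TEXT)''')

def create_teams_table(conn):
    conn.execute('''CREATE TABLE IF NOT EXISTS teams
                    (team_id INTEGER PRIMARY KEY, city TEXT, abbreviation TEXT)''')

def main():
    # one function per table, all called from a single entry point
    conn = sqlite3.connect('nflpool.db')
    create_players_table(conn)
    create_teams_table(conn)
    conn.commit()
    conn.close()

if __name__ == '__main__':
    main()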

Progress!

Talk Python Training: Consuming HTTP Services in Python Review

Summary / tl;dr: Consuming HTTP Services in Python is a great addition to the training courses from Talk Python and Michael Kennedy. You’ll come away with a thorough knowledge of the best way to get data from the internet using the requests module; you’ll use real-world examples and APIs from Basecamp, Github and a custom API Michael built just for the course; and Michael explains and shows the concepts in an easy-to-learn manner with a little humor, recapping each concept to make sure you understand.

In addition to being host of the well known Talk Python podcast, Michael Kennedy has also created a number of Python training courses. The first, Python Jumpstart by Building 10 Apps, launched its Kickstarter exactly a year ago this month, and was quickly followed later in the year with Python for Entrepreneurs on Kickstarter and Write Pythonic Code Like a Seasoned Developer.

I started and finished Python Jumpstart by Building 10 Apps late last year and loved it. It was a very different learning experience than the University of Michigan’s Python for Everybody class on Coursera. There is an assumption with the Talk Python training courses that you have some basic understanding of computer science or programming. I don’t, so I typically go a little slower and take my time with the courses.

Looking back, there are a few things I liked about the Python Jumpstart by Building 10 Apps course that I was glad to see continue in this latest course:

  • Michael makes it very easy to follow along in the beginning of the courses. Everyone learns differently, but one of the ways I learn best is to follow along by typing the code as he does in the video, helping me commit it to memory.
  • After teaching you a core concept and coding it into one of the apps, Michael recaps what you’ve just learned in its own “Concept” video. This summarizes the concept you just put into practice and reinforces what you’ve learned.
  • Compared to some of the other online courses I’ve taken, I really like that I know I’m learning from someone well known in the community, and I believe I’m not just learning how to code, but coding best practices. I don’t know if I’m explaining this right, but as an example: a few of the online classes I’ve taken haven’t had me put the code into functions and then call them from a main() function.
  • The source code to the examples Michael teaches you is on Github. You can download it, star it, fork it – but it’s available if you want to follow along, code along as the course goes, or just save it for reference for the future.

I’ve shared my enthusiasm for the Talk Python training courses here and on Twitter and when Michael reached out to me last week asking if I was interested in having a sneak peek at his latest course, Consuming HTTP Services in Python, I jumped at it (after making sure he knew I was still a novice early in my Python learning curve). I took a look at the course overview and this is right in my wheelhouse of what I need to learn. A core part of the app I want to build is exactly what this course is about – using the requests module to download at least a half dozen JSON feeds and then building my app around that. (My app is to build the scoring for a custom NFL Pool league – it’s not a fantasy league, it’s different. All of the data comes from MySportsFeeds, who provides sports data via JSON or XML which I will consume, store in a database, and then write a Python program to calculate the league and player scores to be displayed on the league website.)

What I really liked about this course was that it was focused on one thing: consuming services. I’ve taken a few different Python courses online as I try to learn Python, and most throw all the basics you need to know at you – everything you’d expect in a beginner course, but it does get overwhelming. This was the first course I’ve taken that was focused on getting you really good at one thing, covering the different ways you might need to do it.

Immediately, I learned something new. I only knew of requests from what I had learned using Google and Stack Overflow. When I started playing around and putting together the building blocks of my app, I wrote the following code. MySportsFeeds currently uses HTTP Basic Authentication, so I have a separate file called secret.py that stores my username and password – I may be new to Python, but I’m smart enough to have created that, imported it, and added it to my .gitignore file!

This code polls the Playoff Team Standings feed on MySportsFeeds, and then I have some (ugly) Python code that runs a for loop to rank each of the two NFL conferences’ teams from 1 to 16.

import json
import requests
from requests.auth import HTTPBasicAuth
import secret  # my credentials module, kept out of Git via .gitignore

response = requests.get(
    'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/playoff_team_standings.json?teamstats',
    auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))

# decode the raw bytes and parse the JSON by hand
rawdata = response.content
data = json.loads(rawdata.decode())

And what did I learn? As I tweeted last week, my code now looks like this:

response = requests.get(
    'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/playoff_team_standings.json?teamstats',
    auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))

# requests decodes and parses the JSON in one step
data = response.json()

It’s not a lot – it’s just one line of code – but it’s these little things. I had no idea of the power of requests; this is just one specific example of something I learned from this course. Another thing I learned? I should take the URL in the example above, create a base_url variable, and then append the feed name as another variable. This is covered in a later chapter of the course, Consuming RESTful HTTP Services. That chapter has a ton of great examples I’m going to be referencing when writing my app.
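
Roughly what that refactoring looks like applied to my own code (reusing the imports and the secret module from the snippet above; the helper name is mine, not the course’s):

base_url = 'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/'

def get_feed(feed_name, **params):
    # build the full URL from the shared base plus the feed name
    response = requests.get(
        base_url + feed_name + '.json',
        params=params,
        auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))
    response.raise_for_status()
    return response.json()

standings = get_feed('playoff_team_standings')
sacks = get_feed('cumulative_player_stats', playerstats='Sacks')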

The Consuming RESTful HTTP services chapter is where the course really starts to take off. I ran into this with the Jumpstart course as well – Michael does a great job in teaching you the building blocks and then the course seems to go from 0-60. This is where having previous programming experience is helpful as that jump from learning what each puzzle piece does to how you put the puzzle together clicks. For someone like me, without any programming experience, it’s a big jump, but possible.

With that said, this chapter is fantastic. While I had a cursory knowledge of HTTP methods like GET and PUT, the API Michael built for the course is awesome. You have the opportunity to create your own examples and interact with the API and blog explorer app – this isn’t something you see with most online courses out there.

I also learned that I only want to use requests, and not the built-ins. Though I do now have an understanding of the urllib built-in for Python 3.x if I’m ever cornered and have to use it.

I will admit to skipping the chapter on SOAP. I’m a hobbyist, not an enterprise developer who may encounter SOAP. But it’s great that it’s available for those who may need it as part of this course. This, combined with learning how to use JSON, XML, and screen scraping, makes it a complete course.

The last chapter is on screen scraping. There are a ton of tutorials and classes available on the web about screen scraping, and I’ve taken a few of them – one of the challenges I have with my app is figuring out the playoff seeding, and I thought about scraping NFL.com, but that’s a different story. This chapter kicks off with an example of using a site’s sitemap.xml – an approach I’ve never seen before that makes so much sense once you learn about it. And if a website you want to scrape doesn’t have a sitemap.xml, shame on them for not being search engine friendly. But if they don’t, Michael goes through other ways to scrape a website using Beautiful Soup, and does it in the most Pythonic way I’ve seen yet in a course.
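
The sitemap approach, roughly as I understood it from the course, using requests and Beautiful Soup (the URL is a stand-in, and this is my own sketch rather than the course’s code):

import requests
from bs4 import BeautifulSoup

# sitemap.xml lists every page URL, so there's no link-crawling needed
resp = requests.get('https://example.com/sitemap.xml')
soup = BeautifulSoup(resp.text, 'xml')   # the 'xml' parser requires lxml
urls = [loc.text for loc in soup.find_all('loc')]

for url in urls[:5]:
    page = BeautifulSoup(requests.get(url).text, 'html.parser')
    print(page.title.string if page.title else url)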

I enjoyed Consuming HTTP Services in Python. With the requests module and JSON being cornerstones of the app I hope to write, it was great to learn everything I need to know to make that happen. Michael’s delivery is conversational and he makes it easy to follow along and do the code examples with him, if you choose to. If you have programming experience or are coming from a different language, the videos themselves will probably teach you what you need to do in Python. If you’re like me, a complete novice to Python, you’ll be able to follow along, but be prepared for the jump the course makes in the Consuming RESTful HTTP Services chapter – it moves pretty quickly, but if you’ve forked the Github repo you’ll have access to the program Michael has written, and you can (and should) write your own examples to interact with the API on the blog explorer. For $39, you’re getting a well-developed course from someone well known in the Python community, teaching you the Pythonic way to interact with services. While other online training sites might have “sales” that are cheaper, as someone new to Python who has taken some of those courses, trust me – the Talk Python courses are well worth the money.

I’m still early in my Python journey and the two courses I’ve finished from Talk Python have been the best learning resources I’ve used out of all the books and training I’ve purchased (and it’s a lot). I’m still working my way through Python for Entrepreneurs and am really looking forward to two of the upcoming courses using SQLAlchemy as this database stuff is way over my head right now. Thanks again to Michael for allowing me to have a preview of the Consuming HTTP Services course – now it’s time for me to take his advice from the last chapter of the course and write some code – the best way to actually learn.

The macOS apps I’ll miss the most

I had been considering switching back to GNOME full-time, and I finally pulled the trigger last week, installing Fedora 25 on both my iMac and MacBook Pro. I installed GNOME on my iMac a couple months ago, but didn’t do the installation correctly and screwed up my MBR, resulting in GNOME being the only boot option. I’ve fixed that this time and have kept dual boot (just in case, and for iTunes with my iPhone and iPad).

The more I’ve thought about this over the last couple months, the more I have wanted to go back to GNOME. The privacy concerns I have about the big tech companies continues to nag at me and there is something about the open source ethos that appeals to me. I may even switch back to Android from iOS if this works well.

I will still be tied to the Apple ecosystem with my work laptop. That’s both good and bad as I think about the few apps that have held me back from making the switch full time. The only alternative would be to switch to Windows, which is never going to happen. I haven’t used Windows since 2004 and considering what Microsoft has done with tracking in Windows 10…

There are a handful of apps on macOS that just don’t have a Linux equivalent, or if they do, aren’t close from a usability standpoint. The last three below are the big ones for me. I also see the irony in that those three apps are some of the most expensive applications I’ve purchased through the Mac App Store. You do get what you pay for, and I really shouldn’t be comparing these, especially the last two, which Apple has previously featured as apps of the year, to free and open source apps. I should be grateful that there are programmers out in the open source world making applications and offering them without charge, rather than comparing them to Mac equivalents.

In no particular order, the apps I’ll miss the most:

Messages

I love text messaging from my desktop (and the immediacy of the notifications). I’m old, shouting Get Off My Lawn and just don’t like tapping on virtual keyboards compared to a real keyboard hooked up to a computer. But I can live without this.

Status: Can live without this.

Pocket

The web client is pretty good and I’ll probably continue to use the iPad as the primary reading device for Pocket. I can live without this. Firefox has a save to Pocket add-on that works just fine.

Status: Can live without this.

Reeder

Reeder is my RSS reader of choice, and there are a number of RSS readers available on Linux. Feedbin, the replacement service for Google Reader that I pay for annually, also has a decent web interface. New links open in a tab in the browser instead of Reeder’s readability feature. I’ll miss Reeder.

Status: Can live without it.

Update: I’ve found FeedReader in the Fedora 25 repositories. Version 1.6 is in the repo, but the developer has also made a Flatpak available for version 2.0 that was released two days ago and I’m now running. A few thoughts:

  • This has fantastic usability. Almost to the level of Reeder. This is a slam dunk as far as RSS readers go.
  • I installed the Flatpak because version 2.0 adds support for both Feedbin and Pocket as a read-it-later service. Feedbin support is working great, and after upgrading from the 2.0 beta to 2.0 final, Pocket support is working flawlessly. FeedReader automatically added Pocket as a service since I had it configured in GNOME Online Accounts.
  • A big thank you and shout out to the developers for taking the time to release a Flatpak making it easy for users to upgrade to the latest version.

Updated Status: Found a replacement that is just as good as one of the best Mac apps.

1Password

Considering all the work I did over the Christmas holiday to change weak passwords to strong passwords and removing duplicates, and also the integration with iOS, this is a big loss as there is no Linux client for 1Password. There are a few password management alternatives on Linux, but I don’t know how good they are. Ryan C. Gordon aka icculus did write a 1Password script for Linux that may be worth checking out: https://icculus.org/1pass/

Status: More research needed and may just need to switch to Encryptr or Enpass.

Tweetbot

Ouch. This one hurts. I love Twitter, it’s the only social network I’m active on. I love syncing my Twitter reading experience between all my devices, which Tweetbot does better than any other application out there, regardless of platform or operating system. I’ve installed Corebird on Fedora and it’s ok, but it’s not Tweetbot.

Status: This one hurts. I can probably confine myself to Twitter on iOS and use Pocket to save and read links.

Ulysses

I love, love, love writing in Ulysses. It’s hands down the best writing app I’ve ever used after trying Scrivener, Hemingway and others. The iCloud integration is great, making it easy to jump to and from other devices, including iOS. I am using Ulysses to not only write for my blog and journal (then importing into Day One) but also as an Evernote replacement after Evernote screwed everyone over with their privacy settings (though they would later backtrack, I’ve lost all trust in them). Like most of the great Mac apps, they’re Apple only. If I’m writing anything, I’m always starting in Ulysses.

I’m using Dropbox Paper right now to try it out as a replacement for Ulysses, and while Paper is close, its lack of true Markdown support while writing bugs me. It’s not too bad if I open it in its own browser window and use it in its own workspace – this makes it feel more like a writing app and not a browser. I’ve spent significant time learning Markdown for both Ulysses and Day One, so Dropbox Paper missing real keyboard shortcuts for Markdown kind of sucks (some work, like strong and italics, but others, like headings, don’t). I’ve installed the Markdown plugin in WordPress, making it easy to copy and paste drafts from Ulysses to my blog or to Day One. It is possible to export Dropbox Paper as Markdown, and after a cursory glance there are some decent-looking Markdown editors available on Linux, so there may be hope.

Status: Can probably live without it. But I’m not happy about it.

Day One

This is probably the biggest one for me. If I love Ulysses, I love Day One more. And like Ulysses, Day One is exclusively in the Apple ecosystem. Ironically, I don’t write in my journal nearly as much as I should. But I love the integration with IFTTT and use it to track all of my exercise entries from Endomondo. I spent an hour looking at journaling options on Linux last week, and there are a couple, but I don’t see a way to sync the entries between computers, which is a must-have feature. One option is to continue to use Day One on my work laptop, or use a Markdown editor on Linux, save in Dropbox, and then import. I’ve also come across jrnl, a command line journaling app that says it works with Day One, but I really love the user experience of Day One’s app. This one hurts the most – Day One was one of the first apps I ever bought in the Mac App Store and I have years of journal entries in there.

Status: Ouch. I really don’t want to miss this. I’m not ready to start journaling in another app, so I’ll probably just write drafts in Dropbox Paper and then use my work laptop for journal entries.

Why I’m going back to Linux after five years of using macOS

I’ve been a supporter of the Electronic Frontier Foundation since 2004. Their work on privacy, free expression and technology covers things I am passionate about. For the last year or so, I have become more concerned with privacy issues in technology. The rise of big data and the way everything we do is tracked have given me significant concerns. I’ve been giving a lot of thought to which ecosystems I want to stay in. I’m not going to say I trust any of these technology companies, but I can control (or minimize) my footprint with some of them.

Last year I took a number of steps in this direction:

  • I deleted my Facebook and Instagram accounts. I don’t think I need to go into detail here, but Facebook isn’t something you would ever equate with the word “privacy”.
  • After Evernote said they would access your notes and data (only to backtrack later) I quickly stopped using Evernote.
  • I’m paying cash for most of my personal purchases and now shopping local and not online – even if I have to pay a bit more for things such as records, books or cycling gear.
  • I went through and deleted over a hundred online accounts over the Christmas break and used a password manager to make sure I wasn’t using duplicate passwords online and also that I was using secure passwords.
  • I’m no longer using Flickr (and Yahoo services in general) for my photos, and I have a tough decision to make about whether I delete that account and remove access to the photos there. (Wikipedia uses a number of my Green Bay Packers photos under a Creative Commons license.)
  • I switched to DuckDuckGo instead of Google as my default search engine.
  • As much as I’m intrigued by Amazon’s Alexa and Google Home, I won’t buy a voice-activated device. Just think about what data it knows about you – what smart devices are in your house, what you’re saying around it – and the recent story in the news about a police department wanting that data scares the shit out of me.
  • I’m not using TouchID on my iOS devices. Courts have ruled multiple times that your fingerprint is not protected under the Fifth Amendment – but a passcode is.

Yes, I sound paranoid. But at the end of the day, this is my decision and my choice. I may not have anything to hide, but I don’t believe that just because we have the technology, it always needs to be used to collect everything about you. While I will never be able to erase everything about me online or with these technology companies – nor would I necessarily want to – I can control with whom I do business and make conscious choices about it. This way I can be eyes-wide-open that yes, I’ve been using Gmail since it first launched and Google knows almost everything about me. But that’s my choice to stay within Google’s ecosystem (for now), even if I start to use fewer of their services, such as switching to DuckDuckGo for internet searches.

I stopped using Microsoft Windows in 2003, when I switched to Linux full time, until about 2012, when I started using macOS after buying my first MacBook. I love Apple’s hardware and I like macOS – the same Unix internals underneath, lots of polish, and excellent apps. Everything just works – you don’t have to fiddle with video card drivers or wireless. But you will have to do things the way Apple wants you to (see: iTunes). Integration with iOS is great – answer phone calls on your Mac, reply to text messages. But who knows what Apple is tracking, as well as the apps you’re using (I’m looking at you, Evernote). And don’t get me started on the Touch Bar on the new MacBooks. (No Escape key? Really?)

So I’m going back to using Linux on the desktop after five+ years away. There is no question that the macOS user experience is significantly better. But using the GNOME desktop on Fedora is pretty close and gets better every release. I’ll know my computing experience is secure and private. I’ll probably share some thoughts on what key applications I’ll miss most in a separate blog post. I’ll still need to use macOS at my day job, but I can control what I use at home and have the peace of mind that nothing is tracking me (outside of what’s in my web browser) when using my own computers.