
NFLPool Prototyping with MongoDB

Yesterday was a good day.

With the static pages for nflpool.xyz complete, I started thinking about the dynamic pages. These are going to require access to the database and I’ll be using MongoDB. I had started the MongoDB course from Talk Python, but put that aside to go back to the Python for Entrepreneurs course to get the site up using Pyramid.

I took a step back and did some brainstorming about the data model I’ll need for the database. I grabbed a spare whiteboard and started scribbling.

[Image: nflpool whiteboard brainstorming]

Using MongoDB requires you to shift your mental model away from traditional SQL and joined tables, as MongoDB doesn’t do joins. I needed to think about what a collection looks like: do I embed more documents within a collection, or have multiple collections?

I went back through the MongoDB course chapter on Modeling and Document Design. Mr. Kennedy has you ask 6 questions when deciding whether or not to embed:

  1. Is the embedded data wanted 80% of the time?
  2. How often do you want the embedded data without the containing document?
  3. Is the embedded data a bounded set?
  4. Is that bound small?
  5. How varied are your queries?
  6. Is this an integration DB or an application DB?

The answers I came up with (which are hopefully correct):

  1. Yes
  2. Almost always
  3. I’m such a newbie I don’t even know what a bounded set is.
  4. I think so.
  5. Not very.
  6. This is an application database.

Keeping in mind that MongoDB has a 16 MB per-document limit – which sounds a lot smaller than it really is, since I’m only dealing with text and not embedding images or anything like that – the answer is to embed everything.

The next step was to go back through the MySportsFeeds APIs and figure out which ones I’ll be using. In no particular order:

  • Cumulative Player Stats (for individual player picks, such as the passing yards leader)
  • Roster Players (used for the league players to pick who the individual player leaders are)
  • Playoff Team Standings (Used for wildcard playoff picks)
  • Division Team Standings (Used for which teams will finish 1st, 2nd and last in each division)
  • Conference Team Standings (Used for the team that will lead its conference in points for as well as some team data, such as the team name, abbreviation, etc.)

I may want the full game schedule at some point, but that’s a bigger challenge than I need to get into right now.

I ended up deciding that I need two collections within my database:

  • Users: this will store the registration information for each player in the league and be used for logging into the website
  • Seasons: Here I will embed all of the data for each season of play, with a document for 2016, 2017, etc. Within each year I’ll embed the league players’ picks, plus a document for each of the APIs above containing 17 embeds – one for each week of the NFL season. One of my goals is that a player can go back to 2016, look at their progress for each week, and see their point total versus the rest of the league. Getting ahead of myself: this won’t be available in MLBPool2 (if and when I ever build that), as that will only show a real-time look at your score and then the final year results.

So it may look something like this, with two collections: Users and Seasons:

nflpooldatabase

  • Users (embed a document for each player in the league in this collection)

  • Seasons (example: “2016” – then embed the following in the 2016 document:)

  • 2016
    • player-picks
      • 1 document for each player’s picks
    • Week 1 through 17 (17 documents total) with the NFL stats for that week embedded here:
      • AFC East
      • 11 more documents for division picks like AFC East above
      • 6 documents for individual leaders
      • Tiebreaker
      • Documents for Points For, Wildcard, etc.
  • 2017
    • player-picks
    • Weeks 1-17 (One document per week)
      • NFL stats with all the embedded documents above
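
Put into code form, a single season document might look something like this rough sketch (the field names here are my guesses at this point, not a final schema):

season_doc = {
    "season": 2016,
    "player_picks": [],   # one embedded document per league player's picks
    "weeks": [
        {
            "week": 1,
            "division_standings": {},   # AFC East plus the other 11 divisions
            "individual_leaders": {},   # the 6 individual stat categories
            "tiebreaker": None,
            "points_for": {},
            "wildcard": {},
        },
        # ...and so on through week 17
    ],
}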

Now it was time to re-build the functions to go get the data from the MySportsFeeds API. I had this working in the last iteration of the app when I was using SQLite, but I’ve never used MongoDB before. Over my lunch hour, I successfully prototyped taking one query and putting it into my MongoDB running locally.
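
The prototype boiled down to something like this (a sketch from memory – the database and collection names are placeholders, and data stands in for one parsed API response):

import pymongo

client = pymongo.MongoClient('localhost', 27017)
db = client.nflpool

data = {"playoffteamstandings": {"conference": []}}   # placeholder for a real MySportsFeeds response
result = db.seasons.insert_one(data)
print(result.inserted_id)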

The feeling of euphoria in successfully using MongoDB was huge.

Last night after work, I took the next step. Keeping in mind the size limitation of MongoDB, I could take steps to filter the API calls, especially for cumulative stats. I only need a few key stats for a subset of all players in the NFL. For example, I just need passing yards for quarterbacks. The MySportsFeeds API provides a ton of stats for every player – such as fumbles, passes over 20 yards, QB Rating, completions – and even defensive stats for offensive players.

Thankfully, Brad Barkhouse of MySportsFeeds is always available in the MSF Slack channel. I couldn’t figure out how to build a filter for just certain positions and a specific stat (it turns out you just join the parameters with an & sign). So if I just want sacks for defensive players, it looks like this:

https://api.mysportsfeeds.com/v1.1/pull/nfl/2016-2017-regular/cumulative_player_stats.json?position=LB,SS,DT,DE,CB,FS,SS&playerstats=Sacks
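
In Python, requests can build that query string for me if I pass the filters as a params dictionary – a minimal sketch, assuming my MySportsFeeds credentials live in a local secret.py module:

import requests
from requests.auth import HTTPBasicAuth

import secret   # local module holding msf_username and msf_pw

response = requests.get(
    'https://api.mysportsfeeds.com/v1.1/pull/nfl/2016-2017-regular/cumulative_player_stats.json',
    params={'position': 'LB,SS,DT,DE,CB,FS,SS', 'playerstats': 'Sacks'},
    auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))
sack_stats = response.json()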

So my task for the weekend is to figure out if I want to embed a document for each individual category, or one document with all of the cumulative stats and then just build queries for each category I care about (sacks, interceptions, passing yards, etc.).

I probably should just focus on the next phase of the Python for Entrepreneurs training, though, and get the user login and authentication built, then go through the Albums part of the training, which I’ll mimic for the league players to submit their picks. I’m running out of time – I really have only a few weeks before player picks need to be submitted ahead of the start of the season.

Prioritizing is fun. But I’m so happy with some of the breakthroughs and progress. I’m trying not to think of all the challenges ahead and just take it one step at a time.

Python for Entrepreneurs Progress

I sat down at dinner last night excited to share with my wife the two things I learned in my Talk Python course yesterday. The first was learning the basics of CSS, something I’ve avoided for years. I’m not going to pretend I understand CSS, but it’s a base of knowledge to work with, and there is still a whole chapter on applied front-end frameworks, so I’m sure there will be more on CSS.

The second was a cache-busting technique, which makes it easy to develop a website and see changes right away without having to clear the browser cache – and is great for users, who will see updates in production as soon as they happen.
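
I won’t reproduce the course’s code here, but the core idea is simple enough to sketch: append a value to each static asset URL that changes with every deploy, so browsers re-fetch the file instead of serving a stale cached copy (the names below are mine, not the course’s):

import uuid

# Regenerated each time the app starts, so every deploy busts the cache
CACHE_ID = uuid.uuid4().hex

def cache_url(url):
    """Append a cache-busting query string to a static asset URL."""
    separator = '&' if '?' in url else '?'
    return url + separator + 'cache_id=' + CACHE_ID

# In a Chameleon template: <link href="${cache_url('/static/site.css')}" rel="stylesheet">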

As I follow along in the Python for Entrepreneurs class, I’m trying something different this time. Rather than code along with the examples as Mr. Kennedy does them, I’m trying to build nflpool.xyz using code similar to what’s in the training. There have been a couple of gotchas doing it this way, as you’ll start a chapter doing something one way and then learn a different and better way to do it. Overall, I kind of like doing it this way, as hands-on application is one of the ways I learn best.

Unfortunately, the cache-busting code broke Pyramid and I got a myriad of errors in my Chameleon templates. I didn’t realize this until after I had started the routing section of the training, and then lost an hour or two trying to troubleshoot the cache-busting. I finally gave up and ripped the cache-busting code out of the templates, and everything is working again. Well, working without the cache-busting code.

As I worked on the routing section – and I’m not going to say I truly understand it yet – it started to click for me why using a Python framework like Pyramid makes sense. When I’ve mentioned to a couple of people that I’m going to use Python and Pyramid to build a site, I’m usually asked why I wouldn’t just use Javascript like everyone else these days. For me, focusing on one language at a time and not trying to learn too much is key. I’m pretty sure that I could do everything I want to do with nflpool in Javascript (including both the game calculations and the website), but Python’s readability and reputation for being a good language to start with really appeal to me. So why wouldn’t I build it in Python? It gives me more hands-on experience with the language, which I need, and I can include the code needed for the scoring right in the application – I don’t have to build two things, the game scoring and a website.

I still have a lot to get through this week. I need to finish the applied web development chapter, including forms (yay! Maybe I’ll be able to take user picks by next month. Now, don’t get ahead of myself…); then front-end frameworks with CSS and Bootstrap; the biggest of them all – databases (more on that in a second); as well as account management; and finally, deployment. I skimmed the deployment chapter Sunday night – lots of good stuff (even if they use an Ubuntu VPS in the training) and I’m excited to give Ansible a chance.

One of the nice things about the trainings from Talk Python is that Michael Kennedy offers office hours – a Q&A session if there is something you’re stuck on. The next one is tomorrow, which is perfect. I’m going to see if I can solve my cache-busting problem, and if not, maybe ask for help. I also want to ask his thoughts on using MongoDB instead of SQLite, since Python for Entrepreneurs uses SQLite and I just took his MongoDB training last week.

I’m really enjoying the class and glad I’m making time for it. I don’t know if I’m going to have a skeleton up by the end of the weekend or not, but it’s coming along.

Stay on Target (or why my Python app still isn’t built)

One of the things I’m not doing well is focusing on one task at a time. As I continue to learn Python, every time I come across a way to do something, I want to implement it right away without thinking ahead to how all the different things work together. Then I’ll get stuck, and frustrated, and my pace slows.

I need to find a task, stay on target, and just finish it, rather than jumping from feature to feature. With this in my mind, I’ve taken a step back to think about what needs to get done.

There are two major things that need to be built:

nflpool.xyz

Using Pyramid, I need to get the website up – even if this is just a skeleton. The Python for Entrepreneurs course will get me there. I need to follow through and finish the course.

Major features for the website include:

  1. User creation / management: This includes creating an account, resetting passwords and login / logout. The course does an awesome job of showing how to properly hash and salt passwords – I just watched and worked through this chapter yesterday.
  2. Yearly Player Picks: I will need to create a form for each player, after they have created an account, to submit their picks. This form will need to talk to the database to display the list of teams in each conference and the players available in each of the positions. I briefly looked at the Pyramid documentation Friday night and something like WTForms might work for this, but I really know nothing about it at this point (see the sketch after this list). From there the player will need to hit submit, review their picks or make changes, and then submit their picks, which are stored in the database.
  3. Scoring: The last section of the website is the most important part to each player – how are their picks doing against everyone else? One of the reasons I’m using a database is that the cumulative player stats that MySportsFeeds provides are just that – cumulative through the season. There isn’t a way to just get the quarterback stats for week 5 of the 2016 season – so I need to store each week’s stats in the database. This way a player can track their progress in nflpool through the year. Want to see where they stand right now? Check. Versus two weeks ago? Check. So the website will need to default to the latest week and then let the player choose the year and the week to see past history.
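
Here’s a rough guess at what a WTForms version of the picks form could look like – I haven’t actually used the library yet, so all the names here are hypothetical:

from wtforms import Form, SelectField

class PicksForm(Form):
    # choices would be populated from the teams and players in the database
    afc_east_first = SelectField('AFC East - 1st place', choices=[])
    passing_leader = SelectField('Passing yards leader', choices=[])

# In a Pyramid view (hypothetical):
#     form = PicksForm(request.POST)
#     if request.method == 'POST' and form.validate():
#         save_picks(form.data)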

The only open question at this time for creating the website is whether to use SQLite or MongoDB – I’d prefer MongoDB, as I’m stuck on how to create the individual player picks table and want to try it as a key/value store in a MongoDB collection. The course is focused on SQLite with SQLAlchemy – something I’d like to learn, but I think MongoDB might also be easier for taking the JSON from MySportsFeeds and sticking it right into the database.

nflpool app

The app has two major features that need to be completed:

  1. Import data via JSON from MySportsFeeds into the database: I had all of this done using SQLite. If I choose to switch databases, I’ll need to rewrite this code.
  2. Scoring calculations: This isn’t done at all. This depends on the player picks table in the database, which is where I was stuck a few months ago when I took a break. I can’t figure out the data model for it no matter how many times my wife tries to explain it to me, and I don’t know if I’m just being optimistic in thinking a key/value store in MongoDB would work better. I’m going to give this a bit more thought and actually write out what the document would look like (see the sketch below). This would probably be an embedded document in each user’s account.
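
As a first pass, that embedded picks document might look something like this (hypothetical field names and values):

player_picks_2017 = {
    "season": 2017,
    "afc_east": {"first": "NE", "second": "MIA", "last": "NYJ"},
    "passing_yards_leader": 5829,   # a MySportsFeeds player_id
    "sacks_leader": 6120,           # also a player_id
    "points_for": "ATL",
    "tiebreaker": 5500,
}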

Next Steps

With all that said, I think working on the website and getting a skeleton up is the best next step. If I can get the site up, then start to work on the submission form for the picks (which will require a bit of importing team data into a database, so I’ll have to make a decision there), I think I’ll feel a lot better. In a perfect world – and I know this isn’t going to happen in the next 30-45 days, when I really need it – I’d have the submission form working prior to the start of the 2017 season. Even if I still have to calculate the points weekly like I’ve been doing for the last two years, at least I’d have the picks in the database this time instead of having to work with the challenge of Google Sheets.

MongoDB for Python Developers

I’m taking the latest training course that launched a couple of weeks ago from Michael Kennedy at Talk Python: MongoDB for Python Developers. This is my first exposure to NoSQL. Over the last year, I’ve searched Google a few different times trying to understand what NoSQL is, without any success – it always went over my head. Within ten minutes of starting this course, I think I might understand what a document database is.

I took a break from coding the nflpool app a few months ago after my wife gave me some feedback on how I was designing the data model for the SQLite database I was using. I was pretty frustrated, not with her, but with my lack of knowledge. I still hadn’t figured out how to import the individual player picks from the Google Sheet I was using, though I did find some open source code that did it perfectly. The challenge was that if I changed the data model, the import functionality was going to change significantly, and I couldn’t figure it out.

Here it is, early July, and I feel the panic of not having the app built for the upcoming NFL season for the second year in a row. That’s ok – it’s a marathon, not a sprint, to learn Python and build the app. Everyone needs a hobby.

I’m going to create a new branch in nflpool and see if I can use MongoDB instead of SQLite. I need to sit down this weekend, give it some thought, and sketch out the data model for the Users collection, but I think it could (should?) work better than what I was planning. It’s already obvious that importing the NFL statistics from MySportsFeeds via JSON directly into MongoDB should be a slam dunk.

The challenge in switching is twofold. First, I’ll need to understand how the switch changes what I do in the Python for Entrepreneurs course – since I’m going to use Pyramid for the web framework, I’ll need to figure out how Pyramid and MongoDB work together. This is especially true for the user accounts and database sections of the course.

The second risk is that by switching to MongoDB from SQL, there will be no help available from my wife. I might drive her crazy with my questions and the way I ask them, but she has a lot of knowledge of SQL, and it might be even more of a challenge doing the database on my own, in addition to the Python.

I’m enjoying the MongoDB for Python Developers course. To be fair, it’s definitely over my head – I’m not a real developer, nor do I have any kind of database experience or know any Javascript, so I’m taking it slow and in chunks. I’m not coding the examples as I follow along yet – I’m going to audit the whole course, give some thought to confirm this is what I want to do, and then I’ll go through it again. It’s probably in my best interest to finish Python for Entrepreneurs and get the Pyramid web app up and running. I do enjoy Mr. Kennedy’s courses – the way they are structured, how each lecture builds on the others, and his delivery make them worth the money. Even for some of the topics where I don’t have the prerequisite knowledge I probably should, I find myself learning. I’m on vacation next week and plan to spend a good chunk of time going through both the Python for Entrepreneurs course and the new MongoDB course.

NFLPool 0.1 milestone completed

I followed through on my last blog post and made a lot of progress over the weekend – the best way to learn is by doing.  I’ve updated my roadmap for nflpool and broke the development of the nflpool app into chunks:

  • 0.1: Database creation complete – write the Python code and SQL statements to create all the needed database tables using sqlite3.  This includes using the requests module to import all players in the NFL into the database from MySportsFeeds.
  • 0.2: Import the 2016 statistics from MySportsFeeds into the database. This includes everything needed to calculate an NFLPool player’s score: individual player statistics, division standings, Wild Card seeds, etc.
  • 0.3: Scoring calculations are complete – the app works. The nflpool app can take every player’s picks, compare it to the final standings, and output everyone’s score for this past 2016 season.
  • 0.4: If 0.3 can calculate the final 2016 standings, 0.4 will add functionality to step through every week of 2016 individually, from weeks 1 through 17. This will have to be different code, as it won’t use the requests module to get real-time data; it will use the JSON data I downloaded weekly last year. This will help me prepare for the 2017 season by proving that it can calculate the score each week until the season ends.
  • 0.5: The nflpool app now lives on its website, nflpool.xyz. This will include an online form for the 2017 season where players can make their picks and these picks are inserted into the database. This will be built on Pyramid (after I complete the Python for Entrepreneurs course from Talk Python to do this.)
  • 1.0: Full nflpool.xyz integration. Players can browse by week for the current season and past seasons.

After this weekend, the 0.1 milestone is complete. I ran into a few challenges, but the database is complete and I even have cumulative NFL player stats imported as part of the 0.2 milestone. The first challenge I ran into was that I could not get the CSV file imported into the sqlite3 database. We originally used a Google Form to capture each player’s picks, which I saved from Google Docs as a CSV file to be imported. I kept getting a “too many values to unpack” error – the SQL statement was expecting 47 columns, and no matter how many times I checked and re-checked the CSV columns against it, I couldn’t find my mistake. After doing some Google searches, I came across a Python script on Github to import a CSV into SQLite – and it worked!
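
Whether or not it’s exactly what that script does, the general trick is to build the INSERT statement from the CSV header so the placeholder count always matches the column count – roughly like this (table and file names assumed):

import csv
import sqlite3

conn = sqlite3.connect('nflpool.db')
with open('player_picks.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)
    placeholders = ','.join('?' * len(header))   # one ? per CSV column
    for row in reader:
        conn.execute('INSERT INTO player_picks VALUES (%s)' % placeholders, row)
conn.commit()
conn.close()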

The second challenge I ran into today: I realized after importing the players’ picks and the NFL player statistics that the CSV file used NFL player names, while the database uses the player_id, an integer, from MySportsFeeds. Using the player_id is the correct way to do this, but I needed to modify the CSV and re-import. No problem, but after doing this, I realized I would need to do the same thing again for the team picks – I need to use the team_id, not the team name.

This is all now done and I can move on to the 0.2 milestone. Starting with the five picks for individual stats (passing yards, rushing yards, receiving yards, sacks and interceptions – all already imported using requests!), I’ll write a function that compares a player’s pick against whether that NFL player finished in the top three of the category and assigns the correct points. I’ll then add an if statement to see if the nflpool player made a unique pick in that category, and if so, double the points earned.
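
A first sketch of that function might look like this (the point values are placeholders, not the league’s actual scoring):

def score_category(pick_id, top_three_ids, all_picks_in_category):
    """Score one NFLPool player's pick in a single stat category."""
    if pick_id not in top_three_ids:
        return 0
    # e.g. 6/4/2 points for the NFL player finishing 1st/2nd/3rd
    points = {0: 6, 1: 4, 2: 2}[top_three_ids.index(pick_id)]
    # a unique pick (no other league player chose it) doubles the points
    if all_picks_in_category.count(pick_id) == 1:
        points *= 2
    return points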

From there I’ll move on to all the other categories such as Division Standings or Points For and use the same logic.

This is huge progress. The point calculations will be the hardest part of the app (outside of building the website) and now it’s time to see how much Python I’ve learned.

Writing Python to Learn

I’ve spent a lot of time on my Python journey watching videos, reading a lot of articles, reading Reddit and listening to podcasts trying to learn from osmosis. But everyone says the best way to learn is to have something you want to build and get to writing it.

I took a week of vacation in mid-February with a goal of buckling down and writing some code. That didn’t happen. I spent half a day getting my environment set up in Fedora; a half day researching PostgreSQL vs. MySQL and then getting MySQL set up on my development machine and on my server; a day of actual vacation (yay!); a day taking the latest Talk Python course (helpful – and cool!); and then a day trying to figure out how to get MySQL working – which I never did.

Looking back, I wish I had at least captured in my journal what worked well and what wasn’t working, so I could turn that into blog posts – or had just blogged. When I started this journey to learn Python and build my two apps, I had every intention of doing exactly that. Everyone who has a blog intends to write in it – and how many actually do?

I find that when I sit down to code, one of two things happens. If things are going well, I lose track of time, and the next thing I know I have to run the kids to hockey or basketball, or it’s time for me to go to bed, and I don’t recap what I’ve done. The other is I throw up my hands in frustration because it’s not working and I walk away – also not capturing where I’m stuck or why I’m frustrated.

So here we are again, and I’m going to try harder to chronicle my journey. I had a good night last night just sitting down and reviewing the nflpool code I had started. I’ve gone back to using SQLite, as the SQL I had written to create the database tables works – making it work with MySQL wasn’t happening, and I was sick of losing time and using it as an excuse. Considering that there are fewer than 20 people in each of the two leagues, I’m not worried about performance right now. The SQLite code works and I need to make some progress.

Three things I accomplished last night:

  • I created two additional branches in Git. I have a scratchpad branch – this is all my original code from six months ago. It’s terrible. I wasn’t writing functions, it’s not well organized, etc. This was my playground to experiment in trying to put the pieces together. I don’t want to lose these files, so I’ll store them in their own branch, but they won’t be used again. I created a develop branch – this is where I’m doing all my active development. When things are working as they should be, I’ll do a pull request and merge them into master. I don’t know if this is the “right” workflow, but it will work for me.
  • I had three or four different Python scripts to create the tables in SQLite. I created one Python file to create all of the tables I’ll need and created a function for each table. I tweaked some of the columns in a few of the tables after reviewing my data model, realizing that some tables didn’t capture the year or season. I added a main method to call all of these functions. I then deleted the Python scripts that did this individually and merged these changes into master.
  • Lastly, and maybe most important, when I was done for the night, I grabbed my notebook and made a to-do list of what to work on next. For example: one of the tables imports some information needed for the NFL teams (their city, abbreviation, etc.). This data never changes, but I was importing it from JSON data I downloaded from MySportsFeeds. This needs to be re-written to make a request to the MySportsFeeds API to get the data rather than loading a file into memory. (Just in case anyone ever wants to re-use this code to run the same pool – I don’t ever see that happening, but it’s best to do it right the first time.) This way I know where to pick up when I start again, which should reduce the time spent reviewing the code to figure out what to work on next.

Progress!

Talk Python Training: Consuming HTTP Services in Python Review

Summary / tl;dr: Consuming HTTP Services in Python is a great addition to the training courses from Talk Python and Michael Kennedy. You’ll come away with a thorough knowledge of the best way to get data from the internet using the requests module; you’ll use real-world examples and APIs from Basecamp, Github and a custom API Michael built just for the course; and Michael explains and demonstrates the concepts in an easy-to-learn manner with a little humor, recapping each concept to make sure you understand.

In addition to being the host of the well-known Talk Python podcast, Michael Kennedy has also created a number of Python training courses. The first, Python Jumpstart by Building 10 Apps, launched its Kickstarter exactly a year ago this month, and was quickly followed later in the year by Python for Entrepreneurs on Kickstarter and Write Pythonic Code Like a Seasoned Developer.

I started and finished Python Jumpstart by Building 10 Apps late last year and loved it. It was a very different learning experience than the University of Michigan’s Python for Everybody class on Coursera. There is an assumption with the Talk Python training courses that you have some basic understanding of computer science or programming. I don’t, so I typically go a little slower and take my time with the courses.

Looking back, there are a few things I liked about the Python Jumpstart by Building 10 Apps course that I was glad to see continue in this latest course:

  • Michael makes it very easy to follow along in the beginning of the courses. Everyone learns differently, but one of the ways I learn best is to follow along by typing the code as he does in the video, helping me commit it to memory.
  • After teaching you a core concept and coding it into one of the apps, Michael recaps what you’ve just learned in its own “Concept” video. This summarizes the concept you just put into practice and reinforces what you’ve learned.
  • Compared to some of the other online courses I’ve taken, I really like that I know I’m learning from someone well known in the community, and I believe I’m not just learning how to code, but coding best practices. I don’t know if I’m explaining this right, but as an example: a few of the online classes I’ve taken haven’t had me put the code into functions and then call them from a main() function.
  • The source code to the examples Michael teaches you is on Github. You can download it, star it, fork it – but it’s available if you want to follow along, code along as the course goes, or just save it for reference for the future.

I’ve shared my enthusiasm for the Talk Python training courses here and on Twitter and when Michael reached out to me last week asking if I was interested in having a sneak peek at his latest course, Consuming HTTP Services in Python, I jumped at it (after making sure he knew I was still a novice early in my Python learning curve). I took a look at the course overview and this is right in my wheelhouse of what I need to learn. A core part of the app I want to build is exactly what this course is about – using the requests module to download at least a half dozen JSON feeds and then building my app around that. (My app is to build the scoring for a custom NFL Pool league – it’s not a fantasy league, it’s different. All of the data comes from MySportsFeeds, who provides sports data via JSON or XML which I will consume, store in a database, and then write a Python program to calculate the league and player scores to be displayed on the league website.)

What I really liked about this course was that it was focused on one thing: consuming services. I’ve taken a few different Python courses online as I try and learn Python, and most are throwing all the basics that you need to know – everything you’d expect in a beginner course, but it does get overwhelming. This was the first course I’ve taken that was focused on getting you really good at one thing, and in a few different ways that you might need to do it.

Immediately, I learned something new. I only knew of requests from what I’d picked up via Google and Stack Overflow. When I started playing around and putting together the building blocks of my app, I wrote the following code. MySportsFeeds currently uses HTTP Basic Authentication, so I have a separate file called secret.py that stores my username and password – I may be new to Python, but I’m smart enough to have created that, imported it, and added it to my .gitignore file!

This code polls the Playoff Team Standings feed on MySportsFeeds, and then some (ugly) Python code runs a for loop to rank each of the two NFL conferences’ teams from 1 to 16.

import json
import requests
from requests.auth import HTTPBasicAuth
import secret   # local module with my MySportsFeeds username and password

response = requests.get(
    'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/playoff_team_standings.json?teamstats',
    auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))

# Grab the raw bytes, decode them, and parse the JSON by hand
rawdata = response.content
data = json.loads(rawdata.decode())

And what did I learn? As I tweeted last week, now my code looks like this:

response = requests.get(
    'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/playoff_team_standings.json?teamstats',
    auth=HTTPBasicAuth(secret.msf_username, secret.msf_pw))

# requests decodes and parses the JSON in a single call
data = response.json()

It’s not a lot – it’s just one line of code – but it’s these little things. I had no idea of the power of requests; this is just one specific example of something I learned from this course. Another thing I learned? I should take the URL in the above example, create a base_url variable, and then append the feed name as another variable. This is covered in a later chapter of the course – Consuming RESTful HTTP services. This chapter has a ton of great examples I’m going to be referencing when writing my app.
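
The pattern is roughly this (my paraphrase, not the course’s exact code):

BASE_URL = 'https://www.mysportsfeeds.com/api/feed/pull/nfl/2016-2017-regular/'

def feed_url(feed_name):
    """Build the full URL for a MySportsFeeds feed from its name."""
    return BASE_URL + feed_name + '.json'

standings_url = feed_url('playoff_team_standings')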

The Consuming RESTful HTTP services chapter is where the course really starts to take off. I ran into this with the Jumpstart course as well – Michael does a great job in teaching you the building blocks and then the course seems to go from 0-60. This is where having previous programming experience is helpful as that jump from learning what each puzzle piece does to how you put the puzzle together clicks. For someone like me, without any programming experience, it’s a big jump, but possible.

With that said, this chapter is fantastic. While I had a cursory knowledge of HTTP commands like GET and PUT, the API Michael built for the course is awesome. You have the opportunity to create your own examples and interact with the API and blog explorer app – this isn’t something you see with most online courses out there.

I also learned that I only want to use requests, and not the built-ins. Though I do now have an understanding of the urllib built-in for Python 3.x if I’m ever cornered and have to use it.

I will admit to skipping the chapter on SOAP. I’m a hobbyist, not an enterprise developer who may encounter SOAP. But it’s great that this is available as part of the course for those who may need it. This, combined with learning how to use JSON, XML, and screen scraping, makes it a complete course.

The last chapter is on screen scraping. There are a ton of tutorials and classes available on the web about screen scraping. I’ve taken a few of them – one of the challenges I have with my app is figuring out the playoff seeding, and I thought about scraping NFL.com, but that’s a different story. This chapter kicks off with an example of using a site’s sitemap.xml – an example I’ve never seen before that makes so much sense once you learn about it. And if a website you want to scrape doesn’t have a sitemap.xml, shame on them for not being search engine friendly. But if they don’t, Michael goes through other ways to scrape a website using Beautiful Soup, and does it in the most Pythonic way I’ve seen yet in a course.

I enjoyed Consuming HTTP Services in Python. With the requests module and JSON being a cornerstone of the app I hope to write, it was great to learn everything I need to know to make that happen. Michael’s delivery is conversational and he makes it easy to follow along and do the code examples with him, if you choose to. If you have programming experience or are coming from a different language, the videos themselves will probably teach you what you need to do in Python. If you’re like me, a complete novice to Python, you’ll be able to follow along, but be prepared for the jump the course makes in the Consuming RESTful HTTP Services chapter – it moves pretty quickly, but if you’ve forked the Github repo you’ll have access to the program Michael has written and you can (and should) write your own examples to interact with the API on the blog explorer. For $39, you’re getting a well-developed course from someone well known in the Python community, teaching you the Pythonic way to interact with services. While other online training sites might have “sales” that are cheaper, as someone new to Python who has taken some of those courses, trust me – the Talk Python courses are well worth the money.

I’m still early in my Python journey and the two courses I’ve finished from Talk Python have been the best learning resources I’ve used out of all the books and training I’ve purchased (and it’s a lot). I’m still working my way through Python for Entrepreneurs and am really looking forward to two of the upcoming courses using SQLAlchemy as this database stuff is way over my head right now. Thanks again to Michael for allowing me to have a preview of the Consuming HTTP Services course – now it’s time for me to take his advice from the last chapter of the course and write some code – the best way to actually learn.

Dwayne Crooks on learning Python efficiently

Dwayne Crooks wrote a fabulous blog post this week with his advice on learning Python efficiently.

Being a year into my journey, I couldn’t agree with him more. He lists five mistakes that hamper our ability to learn efficiently. Below are his five mistakes, each followed by where I am in my own journey.

  1. Reading a book cover to cover. I strongly agree with this one. This was the first mistake I made a year ago when I decided I wanted to learn Python. I bought Think Python and Learning Python and quickly realized I am not the type of learner who can learn from reading and trying to follow along.
  2. Diving in without a plan. Check! Yes, I have a plan. I know what I want to build. Whew.
  3. Failing to narrow your scope. I think I’m ok on this one? Let’s just quote this one in full from Mr. Crooks:

    Having clear boundaries makes it easy to decide whether or not a new resource is worth your time. That’s why learning Python by trying to build something in it is a great way to go. You’d realize how much of Python you don’t need to know in order to accomplish any one task. You’ll find that the more you narrow your scope at the beginning, the more you’ll learn and the faster you’ll progress.

    The challenge for me in understanding this one, is if you’re new to Python, how do you know where to draw the boundaries? When I get stuck, I revisit some of the classes I’ve taken or search Stack Overflow. I quickly realize how much I don’t know when I find a new way to do something or come across something related but that I don’t need. But knowing what I want to build probably expands my scope instead of narrowing it.

  4. Trying to learn 2 (or more) things at the same time. I’m being very careful with this one. I want to have a prototype of my application working before I move on to my next class, Python for Entrepreneurs, which will teach me how to build my application using Pyramid. The course will also cover CSS, Bootstrap and more web technologies. Where I’m struggling, though, is on my prototype – do I just build the prototype, or do I try to learn some basic SQL, which is what the web app version will need? My head has been in the right spot on this one, as I’ve tried to avoid learning SQL up until now.
  5. Spending too much time studying before you have experience doing. Mr. Crooks hits this one on the head and is basically describing me: Because we’re afraid to fail, we want to know what we’re doing before we ever try. So we spend a lot of time learning before ever trying to apply any of it. I’m wired to be a “learner” and do a deep dive into anything before I pull the trigger. Whether it’s a ton of research before buying a new TV or learning a new skill, this describes me well. But I think I’m ok on this one. If you were to look through my Github repo for nflpool (please don’t), you would see a mishmash of Python. There are probably 25 files in my repo that are basically just a scratchpad for me trying to figure out how to parse JSON or write a for loop to get the results I need. There’s nothing Pythonic in there (yet). For example, I’m not using functions like I should. But once I get the different pieces working, I’ll refactor it the right way. You can argue whether I should be starting it right or not, but I’m diving in and trying to figure it out piece by piece. You have to start somewhere…

Mr. Crooks then goes on and shares his five steps to get started. I’m happy to see I’m on the right track.

One Year of Python

It was Black Friday of 2015. O’Reilly put on a sale of their programming ebooks and I was finally ready to take the plunge and learn Python. I bought three books:

I then signed up for a Coursera class, Python for Everybody, taught by Dr. Charles Severance and started the class. I was ready to do this. I needed a hobby. I had a problem to solve.

Then real life got in the way. A few months earlier, we started building a new house. In January it was time to sell our house, which meant hours of work. Then in February, we moved.

I put learning Python on the back burner. Before I knew it, it was July, and another six months had gone by. It was now fantasy football season and that was the problem I had to solve. I needed a program that would keep track of all football statistics and standings and automatically calculate each player’s points. It was time.

I re-started the Coursera course and spent the time. I was easily spending twenty hours a week reading the course materials, watching the videos and doing the homework.

I confirmed what I knew about myself: I learn best by doing, not just reading or watching videos. The books I had bought were helpful, but just sitting down and reading them, trying to follow along and do the exercises was difficult. Python for Everybody on Coursera was great.

I finished that and moved on to Python Jumpstart by Building 10 Apps by Michael Kennedy, which I had purchased in early 2016 via a Kickstarter campaign. I’m almost done with that a year after I started this journey.

Learning to code in Python is hard. I don’t have a background in computer science and with some of the concepts that the books and courses teach I just don’t have the base knowledge necessary. This sometimes makes it harder and takes longer to understand the concepts. I’m lucky that my wife has worked professionally as a programmer in multiple languages, including Java and SQL. But I drive her crazy when I ask her questions about concepts I clearly don’t understand. I use the wrong terminology or fail to grasp what I’ve been taught.

I don’t know how much I’ve retained from the classes and books. I’m trying to build my application in parallel with my learning. I’m convinced the only way I’m going to learn is to build something, which is a piece of advice most often found online for people aspiring to learn programming. I’m constantly hitting up Google and Stack Overflow when I get stuck. I’ll copy bits and pieces of code from these search results and I’m always doubting whether I understand what I’m copying. I’ve signed up for multiple newsletters and bookmarked dozens of websites with articles on how to learn, code snippets, programming challenges and more. I’m overwhelmed with the concepts I’m learning and I know I don’t understand, let alone use, these concepts.

But I’m going to keep trying. The only way I’ll learn is by building something. The code will be ugly. It will break. And I’ll keep updating it until it works and as I learn more, I’ll make it more elegant.

Here’s to another year.

Importing Team Data into NFLPool

Last weekend I discovered how to pretty print the five JSON files I get from MySportsFeeds. This was helpful for understanding just how much data is nested within each file. I also spent a good chunk of the weekend writing in a notebook. I mostly did some data modeling on what each table in the database should store and what their primary keys would be. I also captured things I need to research and started breaking the project into chunks, as I tweeted out over the weekend.

Monday was a holiday, so I did the first four chapters of Python Jumpstart. I took a break and went back to the JSON files I had worked with. My goal was to start with what should be the easiest table and pull the team data out. This is a dictionary that includes the team name (Texans), city (Houston), abbreviation (HOU) and id (64). The ID number is supplied in the JSON feed and is unique, so I will use it as the primary key. There will be two more columns in the table for conference and division, but I wanted to deal with that later.
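
For example, the Texans’ entry in the feed is a dictionary along these lines (paraphrased from the JSON):

team = {"ID": "64", "City": "Houston", "Name": "Texans", "Abbreviation": "HOU"}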

I wrote a for loop to try and pull out each team’s information. I quickly got stuck and nothing was working. At one point, the loop I had written worked, but only pulled out the data for the first ranked team. I showed my wife my code and she pointed out that it wasn’t iterating in a loop.

I was stuck for two nights working on this after dinner. I finally stepped back, modified my pretty-print Python program, and started breaking down all of the information in the JSON file again. I figured out what was a list, what was a dictionary, and what was nested where. (It looks like I didn’t commit this to the git repo – oops! Will have to fix that.)

After doing this last night, I found the list I needed to work with. I then re-wrote my for loop and I was able to iterate through all 16 teams in the AFC:

# teamlist is the AFC's list of team entries from the JSON feed
teamlist = data["conferenceteamstandings"]["conference"][0]["teamentry"]

x = 0  # index into teamlist
for afc_team_list in teamlist:
    afc_team_name = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["Name"]
    afc_team_city = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["City"]
    afc_team_id = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["ID"]
    afc_team_abbr = data["conferenceteamstandings"]["conference"][0]["teamentry"][x]["team"]["Abbreviation"]
    print(afc_team_name, ",", afc_team_city, ",", afc_team_id, ",", afc_team_abbr)
    x = x + 1

I then copied and pasted and did it again for the NFC. I did try, unsuccessfully, to modify the conference list – “conference” – so I could write just one for loop instead of one for each of the two conferences. But what I have works, so I’ll leave it for now. (I’m sure my code is ugly, but hey, I’m just starting.)
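
For future reference, I think the single-loop version I was going for would look something like this (untested):

# Iterate over both conferences instead of copy-pasting the AFC loop for the NFC
for conference in data["conferenceteamstandings"]["conference"]:
    for entry in conference["teamentry"]:
        team = entry["team"]
        print(team["Name"], ",", team["City"], ",", team["ID"], ",", team["Abbreviation"])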

After that, it was all about writing the SQL insert statements to put this into a SQLite3 database (for now – later it will go into MySQL). That took me an hour, but at the end I got it working and was even able to add the conference name to each row.
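
The insert itself is a plain parameterized statement, something like this sketch (my table and column names may differ in the final schema; the values come from the loop above):

import sqlite3

conn = sqlite3.connect('nflpool.db')
conn.execute(
    'INSERT INTO teams (team_id, city, name, abbr, conference) VALUES (?, ?, ?, ?, ?)',
    (afc_team_id, afc_team_city, afc_team_name, afc_team_abbr, 'AFC'))
conn.commit()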

Next up, I need to handle the data in the Division Standings JSON file. It stores the division name for each division in a conference as AFC/AFC-East. I’ll need to write a for loop to grab it, slice it to remove the “AFC/”, and then put that in the Division field for each team in the Teams table. I’ll also need to stop dropping and re-creating the table each time I insert data, but it’s working.
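
The slice itself is the easy part:

division = "AFC/AFC-East"
division_name = division.split("/")[1]   # "AFC-East" – division[4:] works too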

Progress!