For those who don’t know, Flask is a micro framework for building web sites. One day, I went to Flask’s website and hit a 404 while trying to download the documentation as zipped HTML. From time to time, it is really convenient to have offline documentation, so I went off to build the documentation from Flask’s tarball. Here are the steps to build Flask’s documentation on Arch Linux.
Create a Virtual Environment for Flask and activate it
$ virtualenv2 flask-env
$ source flask-env/bin/activate
Install sphinx in that virtual environment
(flask-env)$ pip install sphinx
Install Flask in “Development Mode”
(flask-env)$ tar -zxvf Flask-0.10.1.tar.gz
(flask-env)$ cd Flask-0.10.1
(flask-env)$ python setup.py develop
Make the Documentation
(flask-env)$ make -C docs html
Go to the Freshly Built Documentation
(flask-env)$ xdg-open docs/_build/html/index.html
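When you are done reading, a single command (provided by virtualenv’s activate script) drops you back out of the virtual environment:

(flask-env)$ deactivate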
I wanted to archive all the episodes of a video podcast. The podcast listed all the episodes in its own RSS feed, but didn’t include the episode number in the filename. So, I wrote a quick Python script that generates a bash script, which downloads the listed episodes. That Python script also adds the episode number to each filename. I went with the approach of generating a bash script to make it easier to review each filename and what’s going to be downloaded. I need to review these things because downloading a lot of files can take a lot of time and there is a risk of naming things the wrong way.
#!/usr/bin/env python3
# Reads a downloaded RSS feed and prints a bash script that downloads every
# episode, prefixing each filename with a zero-padded episode number.
import xml.etree.ElementTree as etree
import math
import sys

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print('Usage: ' + sys.argv[0] + ' location_of_downloaded_rss_file')
        sys.exit(1)
    tree = etree.parse(sys.argv[1])
    channel = tree.getroot()
    urls = []
    current_episode_number = 0
    # Collect the URL of every episode's enclosure.
    for item in channel.iter('item'):
        enclosure = item.find('enclosure')
        if enclosure is not None:
            urls.append(enclosure.get('url'))
    total_episodes = len(urls)
    if total_episodes == 0:
        sys.exit('No enclosures found in the feed.')
    # Pad episode numbers to the width of the largest episode number.
    maximum_number_of_digits = int(math.log10(total_episodes)) + 1
    print('#!/bin/bash\n')
    # Feeds usually list the newest episode first, so pop from the end of
    # the list so that the oldest episode is numbered 1.
    while len(urls) != 0:
        url = urls.pop()
        filename = url.split('/')[-1]
        current_episode_number += 1
        current_episode_number_padded = str(current_episode_number).zfill(maximum_number_of_digits)
        print("wget '" + url + "' -O '" + current_episode_number_padded + '_' + filename + "'")
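Here is one way to use the script (the file names are hypothetical): redirect its output to a file, review the generated wget commands, and then run them:

$ python3 generate_download_script.py feed.rss > download_episodes.sh
$ less download_episodes.sh
$ bash download_episodes.sh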
- Hmm…, “Change Your Font for Easier Proofreading”
- Prehistoric Penguin Fossil Taller Than Most Humans
- SpaceX Chooses a Site Near Brownsville, TX to Build a Private Spaceport
- 99% Invisible Episode 125: Duplitecture
- Yahoo Announces Plans to Offer End-to-End PGP Encryption
- An Observation About the Icons of Messaging Apps
I guess I’m not alone in that observation.
I learned that it is possible to link to a specific page within a PDF file while looking at documentation on the Toastmasters website. The URL in the href attribute uses a fragment identifier that names the page to open. Here is an example of a URL that uses that fragment identifier:
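The file name below is hypothetical, but the #page= fragment is Adobe’s standard PDF open parameter, and most browsers’ PDF viewers honor it:

https://example.com/manual.pdf#page=12

Opening that link should jump straight to page 12 of the PDF.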
Using this will be convenient for taking notes and for other situations that involve referencing an individual page within a PDF file.
I use feed aggregators because I read my news from multiple sources (e.g. NPR, The Verge, The Texas Tribune, KXAS, WFAA, etc.). These feed aggregators give me the option to filter out things that I’m not interested in, like stories that sound like a rehash of the police blotter. Recently, Google decided to discontinue its feed aggregator service, Google Reader.
I’m looking for alternatives. Before I used Google Reader, I was using a desktop feed aggregator called RSSOwl. It supports Linux, Windows, and Mac OS X. I didn’t stop using RSSOwl because of quality issues; I stopped because my workflow changed, and at the time RSSOwl didn’t offer Google Reader synchronization. Acquiring more portable devices over time led me to change my workflow. It became less convenient to use just one device to read the news.
There is a way to get many news sources in one location: use social networks like Facebook, Twitter, and Google+. But I do not want social networks to be the only way I get my news. They don’t have the functionality that I need. There isn’t adequate filtering. There isn’t a way to mark posts as read. And not all news services maintain a social network presence.
As time passes, we’ll reach the shutdown date of July 1, 2013. At that point, I may have to revisit my old workflow of using one device as my centralized location for getting the latest news.