How To Capitalize All Words in a String in Python – The Easy way

I ran into this Medium post today – “How To Capitalize All Words in a String in Python“.

It explains how to convert “hello world” to “Hello World”, but it does it the hard way. An easier solution is to use the built-in title method.

x = "hello world"
print(x.title())
#Hello World

See more documentation here

argparse choices

I saw this post on Medium regarding argparse, which suggests the following –

parser.add_argument("--model", help="choose model architecture from: vgg19 vgg16 alexnet", type=str)

I think the following variant is better –

parser.add_argument("--model", help="choose model architecture from: vgg19 vgg16 alexnet", type=str, choices=['vgg19', 'vgg16', 'alexnet'], default='alexnet'])

If an illegal value is given, for example --model vgg20, the desired behavior of almost every program is to raise an error. This won’t happen in the first case: if the user mistypes the model name, the script will silently fall back to the pre-trained AlexNet (implemented later in the script) instead of failing. Using the choices argument solves this. Adding default='alexnet' takes care of the case where the user does not actively choose a model, which is the desired behavior for the example presented in the original post.
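
With choices in place, argparse rejects illegal values on its own. A minimal runnable sketch (the script name run_model.py is just for illustration) –

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str, choices=['vgg19', 'vgg16', 'alexnet'], default='alexnet',
                    help="choose model architecture from: vgg19 vgg16 alexnet")
args = parser.parse_args()
print(args.model)

# $ python run_model.py                -> prints alexnet (the default)
# $ python run_model.py --model vgg19  -> prints vgg19
# $ python run_model.py --model vgg20  -> exits with an error similar to:
#     error: argument --model: invalid choice: 'vgg20' (choose from 'vgg19', 'vgg16', 'alexnet')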

See more here –

Missing data in Python – 5 resources

Bonus – R-miss-tastic – theoretical background and resources related to handling missing values in R. I recommend the lecture notes.
https://rmisstastic.netlify.com/lectures/

Working with missing data in Pandas – pandas is the Swiss Army knife of data scientists. It allows dropping records with missing values, filling missing values, interpolating missing data points, etc.

https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html
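
As a minimal sketch of these options on a toy series with a missing value –

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.dropna().tolist())       # [1.0, 3.0] - drop missing records
print(s.fillna(0).tolist())      # [1.0, 0.0, 3.0] - fill missing values with a constant
print(s.interpolate().tolist())  # [1.0, 2.0, 3.0] - linear interpolation of missing points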

Missing data visualization – provides several levels and types of visualizations – per sample, per feature, a feature heat map and a dendrogram – in order to gain a better understanding of missing values in a dataset.

https://github.com/ResidentMario/missingno
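
A minimal sketch of the typical missingno calls, on a small made-up dataframe with missing values –

import numpy as np
import pandas as pd
import missingno as msno

df = pd.DataFrame({'a': [1, np.nan, 3], 'b': [np.nan, 2, 3], 'c': [1, 2, np.nan]})
msno.matrix(df)      # per-sample nullity matrix
msno.bar(df)         # per-feature completeness bar chart
msno.heatmap(df)     # correlation of nullity between features
msno.dendrogram(df)  # dendrogram clustering features by nullity pattern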

FancyImpute – implements multivariate imputation and matrix completion algorithms. This package was partially merged into scikit-learn. It focuses on viewing the data as a matrix rather than a composition of columns. Unfortunately, it is no longer actively maintained, but maybe that will change in the future.

https://github.com/iskandr/fancyimpute
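
A short sketch following the usage pattern from the project’s README, treating the data as a matrix with np.nan marking missing entries –

import numpy as np
from fancyimpute import KNN, SoftImpute

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])

X_filled_knn = KNN(k=2).fit_transform(X)       # k-nearest-neighbours imputation
X_filled_soft = SoftImpute().fit_transform(X)  # matrix completion via iterative soft thresholding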

Missingpy – a scikit-learn consistent API for data imputation. Implements KNN imputation (also implemented in FancyImpute) and Random Forest imputation (MissForest). Seems unmaintained.

https://github.com/epsilon-machine/missingpy
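
Since the API is scikit-learn consistent, usage should look roughly like the sketch below (class name taken from the project’s README) –

import numpy as np
from missingpy import MissForest

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0]])

imputer = MissForest()
X_imputed = imputer.fit_transform(X)  # Random Forest based imputation (MissForest)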

MDI – Missing Data Imputation Package – accompanying code to Missing Data Imputation for Supervised Learning (https://arxiv.org/abs/1610.09075)

https://github.com/rafaelvalle/MDI

3 follow-up notes on 3 python list comprehension tricks

I saw the following post about list comprehension tricks in Python. I really like Python’s comprehension functionality – dict, set, list, I don’t discriminate. So here are 3 follow-up notes about this post –

1. Set Comprehension

Besides dictionaries and lists, comprehensions also work for sets –

{s for s in [1, 2, 1, 0]}
#{0, 1, 2}
{s**2 for s in [1, 2, 1, 0, -1]}
#{0, 1, 4}

2. Filtering (and a glimpse to generators)

In order to filter a list, one can iterate over the list or generator, apply the filter condition and output a list, or use the built-in filter function and receive an iterator, which is more efficient, as described further in the original post.

words = ['deified', 'radar', 'guns']
palindromes = filter(lambda w: w==w[::-1], words)
list(palindromes)
#['deified', 'radar']
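
The equivalent list comprehension builds the filtered list eagerly –

palindromes = [w for w in words if w == w[::-1]]
#['deified', 'radar']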

Another nice built-in function to know is map, which, for example, can yield the words’ lengths lazily –

words = ['deified', 'radar', 'guns']
lengths = map(lambda w: len(w), words)
list(lengths)
#[7, 5, 4]

3. Generators

Another nice usage of generators is to create an infinite sequence – 


def infinite_sequence():
    num = 0
    while True:
        yield num
        num += 1

gen = infinite_sequence()
next(gen)
#0
next(gen)
#1
next(gen)
#2

Generators can be piped, return multiple outputs, and more. I recommend this post for a better understanding of generators.
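
For example, one generator can be piped into another; here is a small sketch that squares the infinite sequence defined above –

squares = (num * num for num in infinite_sequence())
next(squares)
#0
next(squares)
#1
next(squares)
#4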

 

3 interesting features of NetworkX

“NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.”

NetworkX lets the user create a graph and then study it. For example – find the shortest path between nodes, find node degrees, find the maximal clique, find a coloring of the graph and so on. In this post, I’ll present a few features that I find interesting and that are perhaps less known.

Multigraphs

A multigraph is a graph that can store multiedges. Multiedges are multiple edges between the same two nodes (this is different from a hypergraph, where an edge can connect any number of nodes, not just two). NetworkX has 4 graph types – the well-known, commonly used directed and undirected graphs, and 2 multigraphs – nx.MultiDiGraph for directed multigraphs and nx.MultiGraph for undirected multigraphs.

In the example below, we see that if the graph type is not defined correctly, functionalities such as degree calculation may yield the wrong value –

import networkx as nx

G = nx.MultiGraph()
G.add_nodes_from([1, 2, 3])
G.add_edges_from([(1, 2), (1, 3), (1, 2)])
print(G.degree())
#[(1, 3), (2, 2), (3, 1)]

H = nx.Graph()
H.add_nodes_from([1, 2, 3])
H.add_edges_from([(1, 2), (1, 3), (1, 2)])
print(H.degree())
#[(1, 2), (2, 1), (3, 1)]

Create a graph from pandas dataframe

Pandas is the Swiss Army knife of every data scientist, so naturally it is a good idea to be able to create a graph from a pandas dataframe. The other way around is also possible. See the documentation here. The example below shows how to create a multigraph from a pandas dataframe where each edge has a weight property.

import pandas as pd

df = pd.DataFrame([[1, 1, 4], [2, 1, 5], [3, 2, 6], [1, 1, 3]],
                  columns=['source', 'destination', 'weight'])
print(df)
#    source  destination  weight
# 0       1            1       4
# 1       2            1       5
# 2       3            2       6
# 3       1            1       3

G = nx.from_pandas_edgelist(df, 'source', 'destination', ['weight'], create_using=nx.MultiGraph)
print(nx.info(G))
# Name:
# Type: MultiGraph
# Number of nodes: 3
# Number of edges: 4
# Average degree: 2.6667
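
The reverse direction mentioned above (graph back to dataframe) is a one-liner –

df_back = nx.to_pandas_edgelist(G)
print(sorted(df_back.columns))
# ['source', 'target', 'weight'] - one row per edge, with the weight attribute kept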

Graph generators

One of the features I find the most interesting and powerful. The graph generator interface allows creating several graph types with just one line of code. Some of the graphs are deterministic given a parameter (e.g. a complete graph of k nodes) while some are random (e.g. a binomial graph). Below are a few examples of deterministic and random graphs; they are just the tip of the iceberg of the graph generator capabilities.

Complete graph – creates a graph with n nodes and an edge between every two nodes.

Empty graph – creates a graph with n nodes and no edges.

Star graph – creates a graph with one central node connected to n external nodes.

G = nx.complete_graph(n=9)
print(len(G.edges()), len(G.nodes()))
# 36 9
H = nx.complete_graph(n=9, create_using=nx.DiGraph)
print(len(H.edges()), len(H.nodes()))
# 72 9
J = nx.empty_graph(n=9)
print(len(J.edges()), len(J.nodes()))
# 0 9
K = nx.star_graph(n=9)
print(len(K.edges()), len(K.nodes()))
# 9 10

Binomial graph – creates a graph with n nodes where each edge is created with probability p (alias of gnp_random_graph and erdos_renyi_graph).

G1 = nx.binomial_graph(n=9, p=0.5, seed=1)
G2 = nx.binomial_graph(n=9, p=0.5, seed=1)
G3 = nx.binomial_graph(n=9, p=0.5)
print(G1.edges()==G2.edges(), G1.edges()==G3.edges())
# True False

Random regular graph – creates a graph with n nodes, edges are created randomly and each node has degree d.

import matplotlib.pyplot as plt

G = nx.random_regular_graph(d=4, n=10)
nx.draw(G)
plt.show()

Random regular graph

Random tree – creates a uniformly random tree with n nodes.

G = nx.random_tree(n=10)
nx.draw(G)
plt.show()

random_tree

All the code in this post can be found here

Additional Resources

Official site

SO questions

https://www.datacamp.com/community/tutorials/networkx-python-graph-tutorial

https://www.geeksforgeeks.org/directed-graphs-multigraphs-and-visualization-in-networkx/amp/

Running my first EMR – lessons learned

Today I tried to run my first EMR job; here are a few lessons I learned along the way. I had previously run Hadoop streaming MapReduce jobs, so I was familiar with the MapReduce state of mind. However, I was not familiar with the EMR environment.

I used boto – Amazon’s official Python interface.

1. AMI version – the default AMI version is 1.0.0, the first release, which means the following specifications –

Operating system: Debian 5.0 (Lenny)

Applications: Hadoop 0.20 and 0.18 (default); Hive 0.5, 0.7 (default), 0.7.1; Pig 0.3 (on Hadoop 0.18), 0.6 (on Hadoop 0.20)

Languages: Perl 5.10.0, PHP 5.2.6, Python 2.5.2, R 2.7.1, Ruby 1.8.7

File system: ext3 for root and ephemeral

Kernel: Red Hat

For me Python 2.5.2 means –
  • Does not include json – new in version 2.6.
  • collections is new in Python 2.4, but not all of its classes were added in that version –
      namedtuple() – factory function for creating tuple subclasses with named fields – new in version 2.6.
      deque – list-like container with fast appends and pops on either end – new in version 2.4.
      Counter – dict subclass for counting hashable objects – new in version 2.7.
      OrderedDict – dict subclass that remembers the order entries were added – new in version 2.7.
      defaultdict – dict subclass that calls a factory function to supply missing values – new in version 2.5.

 
Therefore, specifying the ami_version parameter can be critical. Version 2.2.0 worked fine for me.
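
A minimal sketch of launching a streaming job flow with boto while pinning the AMI version, following the pattern in the boto EMR tutorial linked below; the bucket name, key paths and credentials are placeholders –

from boto.emr.connection import EmrConnection
from boto.emr.step import StreamingStep

conn = EmrConnection('<aws-access-key-id>', '<aws-secret-access-key>')

step = StreamingStep(name='my streaming step',
                     mapper='s3n://<my-bucket>/mapper.py',
                     reducer='s3n://<my-bucket>/reducer.py',
                     input='s3n://<my-bucket>/input/',
                     output='s3n://<my-bucket>/output/')

jobid = conn.run_jobflow(name='my job flow',
                         log_uri='s3n://<my-bucket>/logs',
                         ami_version='2.2.0',  # pin the AMI version explicitly
                         steps=[step])
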
2. Must process all the input!
Naturally, we want to process all the input. However, for testing I went over only the first n lines and then added a break to make things run faster. Since I was not consuming all the input lines, I got an error. More about it here –
3. Output folder must not exist. This is the same as in Hadoop streaming MapReduce; for me, the way to avoid collisions was to add a timestamp –

output="s3n://<my-bucket>/output/"+str(int(time.time()))

4. Why did my process fail – one option which produces a relatively understandable explanation is – conn.describe_jobflow(jobid).laststatechangereason

5. cache_files – enables you to import files you need for the MapReduce process. It is super important to “specify a fragment”, i.e. specify the local file name –

cache_files=['s3n://<file-location>/<file-name>#<local-file-name>']
Otherwise you will obtain the following error –
“Streaming cacheFile and cacheArchive must specify a fragment”
6. Status – there are 7 different statuses your flow may have – COMPLETED, FAILED, TERMINATED, RUNNING, SHUTTING_DOWN, STARTING and WAITING. The right order of statuses, if everything goes well, is STARTING -> RUNNING -> SHUTTING_DOWN -> COMPLETED.
The SHUTTING_DOWN phase may take a while – even for a very simple flow I measured about 1 minute of SHUTTING_DOWN.
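
A simple way to wait for the flow to finish is to poll the state, a sketch assuming the conn and jobid objects used above (the sleep interval is arbitrary) –

import time

while True:
    state = conn.describe_jobflow(jobid).state
    print(state)
    if state in ('COMPLETED', 'FAILED', 'TERMINATED'):
        break
    time.sleep(30)
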
Resources I used –

http://boto.readthedocs.org/en/latest/emr_tut.html

setdefault vs get vs defaultdict

You have a Python dictionary and you want to get the value of a specific key in the dictionary. So far so good, right?

And then a KeyError –

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
KeyError: 1

Hmmm, well, if this key does not exist in the dictionary I can use some default value like None, 10 or an empty string. What are my options for doing so?

I can think of 3 –

  • get method
  • setdefault method
  • defaultdict data structure
get method vs setdefault method

Let’s investigate first –

key, value = "key", "value"
data = {}
x = data.get(key,value)
print x, data #value {}
data= {}
x = data.setdefault(key,value)
print x, data #value {'key': 'value'}

Well, we get almost the same result: x obtains the same value in both cases, but with get the data dict is not changed, while with setdefault it is. When does it become a problem?

key, value = "key", "value"
data = {}
x = data.get(key,[])append(value)
print x, data #None {}
data= {}
x = data.setdefault(key,[]).append(value)
print x, data None {'key': ['value']}

So, when we are dealing with mutable data types the difference is clearer and more error-prone.

When to use each? It mainly depends on the content of your dictionary and its size.

We could time the differences between get and setdefault, but it does not matter much here, since they produce different outputs and the timing was not significant in either direction anyhow.

And for defaultdict –

from collections import defaultdict
data = defaultdict(list)
print data[key] #[]
data[key].append(value)
print data[key] #['value']

setdefault sets the default value for the specific key we access, while defaultdict is the type of the data variable and applies this default value to every key we access.

So, since we get roughly the same result, I timed the processes for several dictionary sizes (left-most column in the table below) and ran each 1000 times (code below) –

dict size   default value   setdefault        defaultdict     (time in seconds for 1000 runs)
100         list            0.0229508876801   0.0204179286957
100         set             0.0209970474243   0.0194549560547
100         int             0.0236239433289   0.0225579738617
100         string          0.020693063736    0.0240340232849
10000       list            2.09283614159     2.31266093254
10000       set             2.12825512886     3.43549799919
10000       int             2.04997992516     1.87312483788
10000       string          2.05423784256     1.93679213524
100000      list            22.4799249172     29.7850298882
100000      set             23.5321040154     41.7523541451
100000      int             26.6693091393     23.1293339729
100000      string          26.4119689465     23.6694099903

Conclusions and summary –

  • Working with sets is almost always more expensive time-wise than working with lists
  • As the dictionary size grows, simple types – string and int – perform better with defaultdict than with setdefault, while set and list perform worse.
  • Main conclusion – choosing between defaultdict and setdefault also mainly depends on the type of the default value.
  • In this test I measured a particular use case – accessing each key twice. Different use cases / distributions, such as assignment or accessing the same key over and over again, may have different properties.
  • There is no firm conclusion here, just an investigation of some of the interpreter’s capabilities.

Code –

import timeit
from collections import defaultdict
from itertools import product

def measure_setdefault(n, defaultvalue):
    data = {}
    for i in xrange(0, n):
        x = data.setdefault(i, defaultvalue)
    for i in xrange(0, n):
        x = data.setdefault(i, defaultvalue)

def measure_defaultdict(n, defaultvalue):
    data = defaultdict(type(defaultvalue))
    for i in xrange(0, n):
        x = data[i]
    for i in xrange(0, n):
        x = data[i]

if __name__ == '__main__':
    number = 1000
    dict_sizes = [100, 10000, 100000]
    defaultvalues = [[], 0, "", set()]
    for dict_size, defaultvalue in product(dict_sizes, defaultvalues):
        print "dict_size: ", dict_size, " defaultvalue: ", type(defaultvalue)
        print "\tsetdefault:", timeit.timeit("measure_setdefault(dict_size, defaultvalue)", setup="from __main__ import measure_setdefault, dict_size, defaultvalue", number=number)
        print "\tdefaultdict:", timeit.timeit("measure_defaultdict(dict_size, defaultvalue)", setup="from __main__ import measure_defaultdict, dict_size, defaultvalue", number=number)