5 interesting things (22/09/21)

Writing a Great CV for Your First Technical Role – a three-part series about best practices, mistakes, and pitfalls in CV writing, showing both good and bad examples. I find the posts relevant not just for first roles but also as a good reminder when updating your CV.

https://naomikriger.medium.com/writing-a-great-cv-for-your-first-technical-role-part-1-75ffc372e54e

Patterns in confusing explanations – writing and technical writing are superpowers. Being able to communicate your ideas in a clear way that others can engage with is a very impactful skill. In this post, Julia Evans describes 13 patterns of bad explanation and accompanies that with positive examples.

https://jvns.ca/blog/confusing-explanations/

How We Design Our APIs at Slack – not only do I agree with this advice, I have had bad experiences with similar issues both as an API supplier and as a consumer. Many times when big companies describe their architecture and processes, they are irrelevant to small companies due to cost, lack of data or resources, or other reasons, but the great thing about this post is that it also fits small companies and is relatively easy to implement.

https://slack.engineering/how-we-design-our-apis-at-slack/

Python Anti-Pattern – this post describes a bug that sits at the intersection of Python and AWS Lambda functions. One could say it is an extreme case, but I tend to think it is more common than one would expect, and you may spend hours debugging it. It is well written and very important to know if you are using Lambda functions.

https://valinsky.me/articles/python-anti-pattern/

Architectural Decision Records – sharing knowledge is hard. Sometimes what is clear to you is not clear to others, sometimes it is not taken into account in the time estimation or takes longer than expected, and other times you just want to move on and deliver. Having templates and conventions makes it easier both for writers and readers. ADRs answer a specific need.

https://adr.github.io/

pandas read_csv and missing values

I read the Domino Lab post “Data Exploration with Pandas Profiler and D-Tale”, where they load data of diagnostic mammograms used in the diagnosis of breast cancer from the UCI website. Instead of missing values, the data contains ?. When reading the data naively with pandas read_csv, the ? is interpreted as a string value and the column type becomes object instead of float in this case.

In the post mentioned above the authors dealt with the issue in the following way –

masses = masses.replace('?', np.NAN)
masses.loc[:,names[:-1]] = masses.loc[:,names[:-1]].apply(pd.to_numeric)

That is, they first replaced the ? values in all the columns with np.NAN and then converted all the columns to numeric. Let’s call this the manual method.

If we know the non-default missing values in advance, can we do something better? The answer is yes!

See code here

Use na_values parameter

df = pd.read_csv(url, names=names, na_values=["?"])

The na_values parameter accepts a scalar, string, list-like, or dict. If you pass a scalar, string, or list-like value, all columns are treated the same way. If you pass a dict, you can specify a different set of NaN values per column.
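For example, a dict lets you mark different markers as missing in different columns (same url and names as above; the per-column markers here, including "N/A", are just for illustration and not taken from the dataset) –

df = pd.read_csv(url, names=names, na_values={"BI-RADS": ["?"], "Age": ["?", "N/A"]})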

The advantage of this method over the manual method is that you don’t need to convert the columns after replacing the NaN values. In the manual method the column types are specified explicitly (in the given case they are all numeric); if there are multiple column types you need to know them and specify them in advance.

Side note – likewise, for non-trivial boolean values you can use the true_values and false_values parameters.
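For instance (a sketch only – the "yes"/"no" markers are made up and not taken from this dataset) –

df = pd.read_csv(url, names=names, true_values=["yes"], false_values=["no"])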

Use converters parameter

df = pd.read_csv(url, names=names, converters={"BI-RADS": lambda x: x if x!="?" else np.NAN})

This is usually used to convert values of specific columns. If you would like to convert values in all the columns in the same way, this is not the preferred method, since you will have to add an entry for each column, and if a new column is added it won’t be handled by default (this can be both an advantage and a disadvantage). However, for other use cases, converters can help with more complex conversions, as in the sketch after the next note.

Note that the result here is different than the result of the other methods, since we only converted the values in one column.
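As an illustration of a more complex conversion (my own sketch – the parse_age helper and applying it to the Age column are not from the original post) –

import numpy as np

def parse_age(value):
    # strip whitespace and turn the "?" marker into NaN, otherwise cast to float
    value = value.strip()
    return np.nan if value == "?" else float(value)

df = pd.read_csv(url, names=names, converters={"Age": parse_age})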

Conclusion

Pandas provides several ways to deal with non-trivial missing values. If you know the non-trivial values in advance, na_values is most likely the best option.

Performance-wise (time), all methods perform roughly the same for the given dataset, but that can change as a function of the dataset size (columns and rows), column types, and the number of non-trivial missing values.

On top of that, make reading documentation your superpower. It can help you use your tools in a smarter and more efficient way, and it can save you a lot of time.

See pandas read_csv documentation here

5 tips for using Pandas

Recently, I worked closely with Pandas and found out a few things that might be common knowledge but were new to me and helped me write more efficient code in less time.


1. Don’t drop the na

Count the number of unique values including NaN values.

Consider the following pandas DataFrame –

df = pd.DataFrame({"userId": list(range(5))*2 +[1, 2, 3],
                   "purchaseId": range(13),
                   "discountCode": [1, None]*5 + [2, 2, 2]})

Result

If I want to count the discount codes by type I might use –  df['discountCode'].value_counts() which yields – 

1.0    5
2.0    3

This will miss the purchases without discount codes. If I also care about those, I should do –

df['discountCode'].value_counts(dropna=False)

which yields –

NaN    5
1.0    5
2.0    3

This is also relevant for nunique. For example, if I want to count the number of unique discount codes each user used – df.groupby("userId").agg(count=("discountCode", lambda x: x.nunique(dropna=False)))

See more here – https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.nunique.html
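For the DataFrame above, this groupby yields the following counts (NaN counted as its own value; I reproduced the output myself, so treat it as a sketch) –

        count
userId
0           2
1           3
2           3
3           3
4           2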

2. Margins on rows / columns only

Following the above example, assume you want to know, for each discount code, which users used it, and for each user, which discount codes she used. Additionally, you want to know how many unique discount codes each user used and how many unique users used each code. You can use a pivot table with the margins argument –

df.pivot_table(index="userId", columns="discountCode",
               aggfunc="nunique", fill_value=0,
               margins=True)

Result –

It would be nice to have the option to get margins only for rows or only for columns. The dropna option does not act as expected – the NaN values are taken into account in the aggregation function but are not added as a column or an index in the resulting DataFrame.
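One possible workaround for a row-only margin (my own sketch, not a built-in pandas option; note that it passes values="purchaseId" explicitly, unlike the call above) –

pivot = df.pivot_table(index="userId", columns="discountCode",
                       values="purchaseId", aggfunc="nunique", fill_value=0)
# append only the row margin: unique purchases per discount code
pivot.loc["All"] = df.groupby("discountCode")["purchaseId"].nunique()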

3. plotly backend


Pandas plotting capabilities are nice, but you can go one step further and use plotly very easily by setting plotly as the pandas plotting backend. Just add the following line after importing pandas (no need to import plotly, but you do need to install it) –

pd.options.plotting.backend = "plotly"

Note that plotly still doesn’t support all pandas plotting options (e.g. subplots, hexbins), but I believe it will improve in the future.


See more here – https://plotly.com/python/pandas-backend/
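As a quick illustration (a minimal sketch with made-up data), df.plot now returns a plotly Figure instead of a matplotlib Axes –

import pandas as pd

pd.options.plotting.backend = "plotly"

df = pd.DataFrame({"x": range(10), "y": [v ** 2 for v in range(10)]})
fig = df.plot(x="x", y="y")  # a plotly.graph_objects.Figure
fig.show()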


4. Categorical dtype and qcut

Categorical variables are common – e.g., gender, race, part of day, etc. They can be ordered (e.g. part of day) or unordered (e.g. gender). Using the categorical data type, one can validate data values better and compare them when they are ordered (see the user guide here). qcut lets us bin data into quantile-based categories and customize the bins and labels.

See the documentation here and the post that caught my attention about it here – https://medium.com/datadriveninvestor/5-cool-advanced-pandas-techniques-for-data-scientists-c5a59ae0625d
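A small sketch of both ideas (the categories and values below are made up) –

import pandas as pd

# ordered categorical: comparisons respect the declared order
day_part = pd.Series(["morning", "evening", "noon"]).astype(
    pd.CategoricalDtype(["morning", "noon", "evening"], ordered=True))
print(day_part < "evening")  # True, False, True

# qcut: bin continuous values into quartiles, yielding an ordered categorical
ages = pd.Series([23, 35, 47, 52, 61, 70, 18, 44])
print(pd.qcut(ages, q=4, labels=["q1", "q2", "q3", "q4"]))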

5. tqdm integration


tqdm is a progress bar that wraps any Python iterable. You can also use it to follow the progress of pandas apply functionality by using progress_apply instead of apply (you need to initialize tqdm beforehand by calling tqdm.pandas()).

See more here – https://github.com/tqdm/tqdm#pandas-integration
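A minimal sketch of the integration (the DataFrame and the applied function are made up) –

import pandas as pd
from tqdm import tqdm

tqdm.pandas()  # registers progress_apply / progress_map on pandas objects

df = pd.DataFrame({"value": range(100_000)})
# behaves like apply but displays a progress bar
df["squared"] = df["value"].progress_apply(lambda v: v ** 2)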

5 interesting things (04/12/2020)

How to set compensation using commonsense principles – yet another piece of art by Erik Bernhardsson. I like his analytical approach and the way he models his ideas. His manifesto regarding compensation systems (good/bad compensation systems) is brilliant. I believe most of us agree with him, but he put it into words. His modeling has some drawbacks that he is aware of – for example, assuming certainty in employee productivity and almost perfect knowledge of the market. Yet, it is totally worth your time.

https://erikbern.com/2020/06/08/how-to-set-compensation-using-commonsense-principles.html

7 Over Sampling techniques to handle Imbalanced Data – imbalanced data is a common real-world scenario, specifically in healthcare, where most of the patients don’t have the condition one is looking for. Over-sampling is a way to handle imbalanced data, and this post describes several techniques for doing so. Interestingly, at least in this specific example, most of the techniques do not bring significant improvement. I would therefore compare several techniques rather than just trying one of them.
https://towardsdatascience.com/7-over-sampling-techniques-to-handle-imbalanced-data-ec51c8db349f

This post uses the following package, which I didn’t know before (it would be great if it could become part of scikit-learn) – https://imbalanced-learn.readthedocs.io/en/stable/index.html

It would be nice to see a similar post for downsampling techniques.


Python’s dos and don’ts – very nicely written, with good examples –
https://towardsdatascience.com/10-quick-and-clean-coding-hacks-in-python-1ccb16aa571b

Every Complex DataFrame Manipulation, Explained & Visualized Intuitively – can’t remember how pandas functions work? Great, you are not alone. You can use this guide to quickly remind yourself how melt, explode, pivot, and others work.
https://medium.com/analytics-vidhya/every-dataframe-manipulation-explained-visualized-intuitively-dbeea7a5529e

Causal Inference that’s not A/B Testing: Theory & Practical Guide – causality is often overlooked in the industry. Many times you develop a model that is “good enough” and move on. However, this might increase bias and lead to unfavourable results. This post suggests a hands-on approach to causality accompanied by code samples.

https://towardsdatascience.com/causal-inference-thats-not-a-b-testing-theory-practical-guide-f3c824ac9ed2

Plotly back to back horizontal bar chart

Yesterday I read Boris Gorelik’s post – Before and after: Alternatives to a radar chart (spider chart) – and I wanted to use this visualization as well, but with Plotly.

So I created the following gist –

import numpy as np
import plotly.graph_objects as go

women_pop = np.array([5., 30., 45., 22.])
men_pop = np.array([5., 25., 50., 20.])
y = list(range(len(women_pop)))

fig = go.Figure(data=[
    go.Bar(y=y, x=women_pop, orientation='h', name="women", base=0),
    go.Bar(y=y, x=men_pop, orientation='h', name="men", base=0)
])
fig.update_layout(
    barmode='stack',
    title={'text': "Men vs Women",
           'x': 0.5,
           'xanchor': 'center'}
)
fig.update_yaxes(
    ticktext=['aa', 'bb', 'cc', 'dd'],
    tickvals=y
)
fig.show()

5 interesting things (11/3/2020)

Never attribute to stupidity that which is adequately explained by opportunity cost – if you have capacity for only 10 words, those are the 10 words you should take – “prioritization is the most value creating activity in any company”. In general, I really like Erik Bernhardsson’s writing and ideas, and I find his posts insightful.

Causal inference tutorials thread – check this thread if you are interested in causal inference.
The (Real) 11 Reasons I Don’t Hire You – I have followed Charity Majors’ blog since I heard her at “Distributed Matters” in 2015. Besides being a very talented engineer, she also writes about being a founder and manager. Hiring is hard for both sides, and although it is hard to believe, it is not always personal – a candidate has to be a good match for the company as it is now, for the company it will become, and for the team. The company would like to have exciting and challenging tasks for the candidate so she will be happy and grow in the company. And of course, we are all human and make mistakes from time to time. Read Charity’s post to examine the not-hiring decision from the employer’s point of view.

The 22 Most-Used Python Packages in the World – an analysis of the most downloaded Python packages on PyPI over the past 365 days. Some of the results are surprising – I expected pip to be the most used package, and it is only fourth after urllib3, six, and botocore, and I expected requests to be ranked a bit higher. Some of the packages are internals we are not even aware of, such as idna and pyasn1. An interesting reflection.

Time-based cross-validation – it might seem obvious when reading, but there are a few things to notice when training a model that has a time dependency. This post also includes Python code to support the suggested solutions.

argparse choices

I saw this post on Medium regarding argparse, which suggests the following –

parser.add_argument("--model", help="choose model architecture from: vgg19 vgg16 alexnet", type=str)

I think the following variant is better –

parser.add_argument("--model", help="choose model architecture from: vgg19 vgg16 alexnet", type=str, choices=['vgg19', 'vgg16', 'alexnet'], default='alexnet'])

If an illegal parameter is given, for example --model vgg20, the desired behavior of almost every program is to fail with an error. This won’t happen in the first case – if the user mistypes the model name, the script will use the pre-trained AlexNet (implemented later in the script) instead of raising an error. Using the choices argument solves this. Adding default='alexnet' takes care of the case where the user does not actively choose a model. For the example presented in the original post this is the desired behavior.
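A minimal runnable sketch of the behavior described above (the model names follow the example; everything else is illustrative) –

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str,
                    choices=['vgg19', 'vgg16', 'alexnet'], default='alexnet',
                    help="choose model architecture from: vgg19 vgg16 alexnet")

print(parser.parse_args(["--model", "vgg16"]).model)  # vgg16
print(parser.parse_args([]).model)                    # alexnet (the default)
# parser.parse_args(["--model", "vgg20"]) exits with an error such as:
# error: argument --model: invalid choice: 'vgg20' (choose from 'vgg19', 'vgg16', 'alexnet')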

See more here –

Missing data in Python – 5 resources

Bonus – R-miss-tastic – theoretical background and resources related to R missing-values packages. I recommend the lecture notes.
https://rmisstastic.netlify.com/lectures/

Working with missing data in Pandas – pandas is the Swiss Army knife of data scientists. Pandas allows dropping records with missing values, filling missing values, interpolating missing data points, etc.

https://pandas.pydata.org/pandas-docs/stable/user_guide/missing_data.html
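A tiny sketch of that toolbox (the data is made up) –

import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])
print(s.dropna())       # drop the missing records
print(s.fillna(0))      # fill missing values with a constant
print(s.interpolate())  # linear interpolation of the gaps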

Missing data visualization – provides several levels and types of visualizations – per sample, per feature, a features heat map, and a dendrogram – in order to gain a better understanding of missing values in a dataset.

https://github.com/ResidentMario/missingno
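The main entry points look roughly like this (a sketch with a made-up DataFrame) –

import numpy as np
import pandas as pd
import missingno as msno

df = pd.DataFrame({"a": [1, np.nan, 3, 4],
                   "b": [np.nan, 2, np.nan, 4],
                   "c": [1, 2, 3, np.nan]})
msno.bar(df)         # non-null counts per feature
msno.matrix(df)      # nullity pattern per sample
msno.heatmap(df)     # nullity correlation between features
msno.dendrogram(df)  # hierarchical clustering of the nullity correlation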

fancyimpute – multivariate imputation and matrix completion algorithms. This package was partially merged into scikit-learn. It focuses on viewing the data as a matrix rather than a composition of columns. Unfortunately, it is no longer actively maintained, but maybe that will change in the future.

https://github.com/iskandr/fancyimpute
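Since parts of it were merged into scikit-learn, similar multivariate imputation is available there as well (a sketch using scikit-learn’s IterativeImputer rather than fancyimpute itself; the data is made up) –

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0]])
print(IterativeImputer(random_state=0).fit_transform(X))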

missingpy – a scikit-learn-consistent API for data imputation. Implements KNN imputation (also implemented in fancyimpute) and Random Forest imputation (MissForest). Seems unmaintained.

https://github.com/epsilon-machine/missingpy

MDI – Missing Data Imputation Package – accompanying code to Missing Data Imputation for Supervised Learning (https://arxiv.org/abs/1610.09075)

https://github.com/rafaelvalle/MDI

3 follow-up notes on 3 python list comprehension tricks

I saw the following post about list comprehension tricks in Python. I really like Python’s comprehension functionality – dict, set, list, I don’t discriminate. So here are 3 follow-up notes about this post –

1. Set Comprehension

Besides dictionaries and lists, comprehensions also work for sets –

{s for s in [1, 2, 1, 0]}
# {0, 1, 2}
{s**2 for s in [1, 2, 1, 0, -1]}
# {0, 1, 4}

2. Filtering (and a glimpse at generators)

In order to filter a list, one can iterate over the list or generator, apply the filter function, and output a list – or one can use the built-in filter function and receive a lazy iterator, which is more efficient, as described further in the original post.

words = ['deified', 'radar', 'guns']
palindromes = filter(lambda w: w==w[::-1], words)
list(palindromes)
#['deified', 'radar']

Another nice-to-know built-in function is map, which, for example, can yield the words’ lengths lazily –

words = ['deified', 'radar', 'guns']
lengths = map(lambda w: len(w), words)
list(lengths)
#[7, 5, 4]

3. Generators

Another nice usage of generators is to create an infinite sequence – 


def infinite_sequence():
    num = 0
    while True:
        yield num
        num += 1

gen = infinite_sequence()
next(gen)
# 0
next(gen)
# 1
next(gen)
# 2

Generators can be piped, return multiple outputs, and more. I recommend this post for a better understanding of generators.
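For example, generators can be piped into one another and evaluated lazily (my own sketch, reusing infinite_sequence from above) –

def squares(nums):
    for n in nums:
        yield n * n

def evens(nums):
    for n in nums:
        if n % 2 == 0:
            yield n

pipeline = evens(squares(infinite_sequence()))
next(pipeline)
# 0
next(pipeline)
# 4
next(pipeline)
# 16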