5 ways to follow publications in your field

This post was published on Medium


An important part of a data scientist's or researcher's life is keeping track of publications in their field. Depending on your field and needs, publications range from papers in academic conferences and proceedings (some of which you can find as YouTube videos) to new technologies and code packages, blog posts, etc. This post focuses on how to keep track of academic research and innovation.

  1. Follow the relevant conferences and journals — make sure you are familiar with the main conferences in your field (e.g. List of Machine Learning and Deep Learning conferences in 2019 / 2020) and follow their publications. You can usually read the accepted papers on the conference website once the acceptance list is published. Talk slides and videos are usually accessible a short while after the conference. Identifying the relevant conferences may require some initial effort, but once you have identified them, it is easy to keep going.
  2. Google Scholar e-mail alerts — track authors and/or keywords that are relevant to you. E.g. if you are interested in causal inference, you would probably want to follow Judea Pearl. You can track new articles, citations, and related articles by author or keyword. I prefer to track only new articles because I found the benefit of citations and related articles low. You can also get email alerts for more complex queries. Set your alerts here.
  3. arXiv E-Mail Alerting Service — arXiv provides a daily digest of new submissions by subject. It is less granular and less focused than Google Scholar, but it gives you access to the newest, hottest submissions. Subscribe to the arXiv E-Mail Alerting Service here.
  4. Follow blogs and publications of companies and research institutes that interest you — these are usually softer publications that give you a taste of the company’s recent advances and research. If something sparks your imagination, move on to reading the full paper. Examples of such blogs: the Facebook research blog, the OpenAI blog, the Google AI blog.
  5. Social media — follow researchers who are relevant to your field on Twitter, see the papers they publish and recommend, and read the discussions they are involved in. Join Facebook groups that discuss the topics you are interested in.
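Beyond the e-mail digest mentioned in item 3, arXiv also exposes a public query API, so you can pull the newest submissions for a category yourself. A minimal sketch, assuming the standard library only; the category `cs.LG` and the tiny sample feed in the usage below are just illustrative:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace used by the arXiv API

def arxiv_query_url(category, max_results=10):
    """Build a query URL for the newest submissions in an arXiv category."""
    params = {
        "search_query": f"cat:{category}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    }
    return f"{ARXIV_API}?{urlencode(params)}"

def parse_feed(atom_xml):
    """Extract (title, link) pairs from an arXiv Atom feed string."""
    root = ET.fromstring(atom_xml)
    return [
        (entry.find(ATOM + "title").text.strip(), entry.find(ATOM + "id").text)
        for entry in root.iter(ATOM + "entry")
    ]
```

Fetching `arxiv_query_url("cs.LG")` with `urllib.request.urlopen` and passing the response body to `parse_feed` gives you the same daily stream the e-mail service sends, in a scriptable form.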

Now you can lean back and enjoy the new ideas coming to you. The next challenge is to invest your time wisely and pick the papers that will be most beneficial for you.


Junior Data Science — Choosing your first job

This post was published on Medium


While there are many people who would like to become data scientists and are looking for their first position, junior data science positions are rare. Data science positions range from very research-oriented roles in companies that also publish in scientific conferences (quite rare) to positions that are more hands-on and involve a lot of coding. (Junior) data scientists also come from diverse backgrounds: recent grads (BSc, MSc, and PhDs in different fields), experienced developers who would like to learn new skills, people retraining, and so on.

Since junior data science positions are rare, it is important to make an accurate choice and avoid common pitfalls. This post was triggered by Ori Cohen’s post “Data-Science Recruitment — Why You May Be Doing It Wrong”, which was oriented toward the recruiting side. This post is for the data scientist who is looking for their first job. Here are a few insights.

Don’t be the first data scientist in the company

This sounds like a very sexy position — you recently graduated from university and managed to impress a small startup with your skills. They offer you the chance to be the first data scientist in the company. Boom! You will be able to shape the methods, processes, and tools the right way, like you always envisioned!

“In theory, theory and practice are the same. In practice, they are not” (Benjamin Brewster).

Many practical tasks are not like in the textbook or in Andrew Ng’s course. You will most probably need guidance and advice from an experienced data scientist who has already made her mistakes, is familiar with the data and the product’s constraints, and is simply more experienced. The skills you want to learn vary over time, but it is always a good idea to have someone around you can learn from.

An additional issue is that small companies usually have little data, often not enough to train models, and data quality might also be a problem. This will require changes in the product, which need to be defined and implemented. As a junior data scientist, it might be complicated to handle both the technical part and the politics required for such a change.

How can you tell that you are interviewing for the first data science position in a company?

  1. You will be told so explicitly — “you will be our first data scientist”
  2. None of your interviewers is a data scientist and the questions they ask don’t reflect a deep understanding of the topic.

People Don’t Quit Jobs — They Quit Bosses

And before quitting — people work for bosses.

Interviews are two-sided. The company interviews you, but you also interview the company. Does the product excite you? Do you think the company has the right values and culture fit for you? Would you like to work for this manager?

Most likely you will work closely with your manager and teammates. Did they impress you? Would you value their feedback?

In order to learn and improve, a lot of feedback and communication is required, especially when you are in a junior position. Are there regular 1:1s? Is there an onboarding plan? Do they participate in conferences / is there an education budget? Does the company have the work-life balance you are looking for?

During an interview, the interviewer might want to please you, so if you ask these questions directly they might answer what you expect to hear. Talking with teammates and other co-workers in the company can give you additional insights about the team and the company.

Tools and Technologies

If you mainly focus on research, you might find this point secondary. However, for your next position, hands-on experience may be required. Be sure to choose a place that uses reasonable technologies and not niche, esoteric ones — e.g. using assembly for machine learning, working in a mainframe environment, etc.

A current reasonable technology stack for a data scientist includes: Python (maybe Scala, maybe R, depending on your risk aversion) and the scientific Python packages (pandas, numpy, scipy, etc.), a cloud environment, and some kind of database (postgres / mysql / elasticsearch / mongodb).
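As a taste of that day-to-day stack, here is a deliberately tiny pandas/numpy sketch — the column names and numbers are made up purely for illustration:

```python
import numpy as np
import pandas as pd

# Toy tabular data: the kind of thing that would normally come from a database query.
df = pd.DataFrame({
    "user": ["a", "a", "b", "b", "b"],
    "purchase": [10.0, 20.0, 5.0, 7.0, 9.0],
})

# pandas for aggregation: per-user mean spend and purchase count.
per_user = df.groupby("user")["purchase"].agg(["mean", "count"])

# numpy for the numeric leg: e.g. a log transform of the raw amounts.
log_spend = np.log1p(df["purchase"].to_numpy())
```

Nothing here is deep, but comfort with exactly this kind of manipulation is what "hands-on experience" usually means in practice.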

Last but not least — choose something you are passionate about so you will be happy to go to work in the morning and dream about your code at night 🙂


Special thanks to Liad Pollak and Idit Cohen who made this text readable

5 interesting things (23/07/2019)

Five Talks from spaCy-IRL Worth Watching – a great summary of 5 talks from the spaCy-IRL conference, which took place in Berlin at the beginning of July. The summaries are very precise – not too deep, not too shallow – and make you want to watch the talks. From a meta perspective – a very nice connection between academia and industry, leveraging ideas from academia to solve industry problems.


King – Man + Woman = King? In 2016, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings” was published and showed that the pre-trained word2vec model trained on Google News articles exhibited gender stereotypes to “a disturbing extent”. Apparently, according to “Fair is Better than Sensational: Man is to Doctor as Woman is to Doctor”, at least some of the bias stems from optimisations / restrictions applied in order to present better results. The most significant one: the answer to “a is to b as c is to ..?” cannot be b. This does not mean that there is no bias, only that it was not measured and formalised correctly. This emphasises once again the need to understand the algorithms we use and their limitations.
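To make that restriction concrete, here is a toy numpy sketch. The vectors are invented for illustration (they are not real word2vec embeddings), but they reproduce the mechanics: with the input words excluded the analogy returns "queen", and without the restriction the nearest neighbour of king − man + woman is simply "king" itself:

```python
import numpy as np

# Hypothetical word vectors, chosen only to illustrate the effect.
vecs = {
    "king":  np.array([1.0, 0.5, 0.0]),
    "queen": np.array([0.6, 0.1, 0.9]),
    "man":   np.array([0.1, 0.9, 0.0]),
    "woman": np.array([0.0, 0.8, 0.3]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, exclude_inputs=True):
    """Answer 'a is to b as c is to ?' via the nearest neighbour of b - a + c."""
    target = vecs[b] - vecs[a] + vecs[c]
    # The contested restriction: drop the query words from the candidate set.
    words = [w for w in vecs if not (exclude_inputs and w in (a, b, c))]
    return max(words, key=lambda w: cosine(vecs[w], target))
```

The paper's point is exactly this gap: the celebrated "queen" answer partly depends on forbidding the trivial answer, which the evaluation never reported.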


Bonus – Linear Digressions episode – Revisiting Biased Word Embeddings


10 tips for code review – code review can be a stressful task for both the reviewer and the person whose work is being reviewed. This post takes the reviewer’s point of view: how to make the process more efficient and constructive for both sides. A good follow-up post would be how to listen and react to code review. From my experience, it is often a boiling point for relationships inside teams and can break teams when not done correctly.


How to label data – if you have ever done a data science project, you know that obtaining tagged data is a real hassle. You often discover that you don’t have enough data, the tagging is not what you need, etc. This guide will help you avoid pitfalls when issuing a labelling project.


Data-Science Recruitment — Why You May Be Doing It Wrong – a post by a data science team lead at Zencity about do’s and don’ts in the interviewing process for data scientists. In the last few years I have witnessed many of these flaws – asking irrelevant riddles, giving a very long, poorly defined home exercise with doubtful data. I would like to emphasise for candidates, especially junior candidates: if you have doubts during the interview process, consider looking for another place.


5 interesting things (26/06/2019)

Checklist for debugging neural networks – a well-written troubleshooting guide for neural network models that is not language- or framework-specific!
https://towardsdatascience.com/checklist-for-debugging-neural-networks-d8b2a9434f21

Why Software Projects Take Longer Than You Think: A Statistical Model – a great post about a problem we all face. Usually we try to solve it using “instrumental changes” – changing methods / processes / etc. This post tries to show that there is more to it than just the behavioural change.

Google What-If-Tool (WIT) – a nice tool by Google that was released a few months ago. The terminology is actually a bit misleading, and counterfactuals don’t carry the meaning they have in causal inference. It is more like matching with two possible distance metrics – L1 and L2.
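A minimal numpy sketch of that matching interpretation — this is my own illustrative function, not the WIT API: the "counterfactual" of a point is just the closest example (by L1 or L2 distance) that carries a different label:

```python
import numpy as np

def nearest_counterfactual(x, X, y, label, metric="l2"):
    """Return the example in X closest to x among rows whose label differs
    from `label` — matching with an L1 or L2 distance, WIT-style."""
    candidates = X[y != label]
    if metric == "l1":
        dists = np.abs(candidates - x).sum(axis=1)
    else:  # l2
        dists = np.sqrt(((candidates - x) ** 2).sum(axis=1))
    return candidates[np.argmin(dists)]
```

Note how far this is from a causal counterfactual: nothing here models an intervention, it is purely a nearest-neighbour lookup in feature space.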

causallib – a new Python causal inference package from IBM.
There is also a Python causal inference package from Microsoft, which was released about a year ago – https://github.com/Microsoft/dowhy.

A Visual Intro to NumPy and Data Representation – What can I say, I really like Jay’s guides.

4 insights from BDHW19

This week I attended BDHW19 – Big Data in Health Care, which was hosted by the Weizmann Institute of Science in collaboration with Nature Medicine. The conference had a great line-up of speakers – leading researchers in the field from academia, industry, and HMOs.
There were a few ideas and themes that were mentioned several times from different angles, and I will highlight some of them.
(All sessions were recorded, and I’ll add a link once they are online.)

EHR data in Israel – By law, every Israeli resident must be registered with one of the HMOs. The HMOs in Israel are in a special position, where they are run as not-for-profit organizations and are prohibited by law from denying any Israeli resident membership. Israeli HMOs hold EHR data from the mid-nineties, which means that the biggest HMO (Clalit) has over 20 years of longitudinal data for 4.5 million heterogeneous patients. Together with great researchers and collaboration with academia, this enables amazing research, which hopefully propagates and influences our daily life.
The Israeli AI Healthcare Startup Landscape of 2018 – https://www.startuphub.ai/israeli-ai-healthcare-startups-2018

Deployment of HC models – while there are great results and tools coming out of research, the way to deploy those models and use the new ideas is long and contains many obstacles. Only very few models have really turned into health care products – alert systems, treatment guidelines, biomarkers, personalized medicine, etc. A few caveats along the way are interpretability, robust machine learning, and causality. We must keep in mind that eventually our research should affect the end users – clinicians, patients, etc.

More on this –
Suchi Saria – “Tutorial : Safe and Reliable Machine Learning” from FAT* 2019.
Ziad Obermeyer – “Using machine learning to understand and improve physician decision making”.

Collaboration – many efforts in the field are made by many parties, and in order to get good results and move from journals to the field we need to cooperate. We need to ask the right questions and design good RCTs or emulate them correctly. We need high-quality data (or at least to be aware of the quality of our data), so biobanks, dataset owners, and researchers need to cooperate in order to get the most out of the data. To see whether our models generalize well, we should run them on different datasets. To make sure our models make sense from a medical perspective, clinicians must be part of the process. We need everyone on board.

More on this –
Rachel Ramoni – “Mine is Big ? Ours is Bigger: Million Veteran Program and the Case for Coordinated Collaboration”
Nigam Shah – “Good machine learning for better healthcare”. See also Clinical Informatics Consult.

Causality – the C word. Causal graphs, counterfactuals, confounders, treatment effects... It was present in almost every talk, implicitly or explicitly. Naturally, some studies are more causal by nature, such as “which drug is better” or “does X cause Y”, and some need to take causal mechanisms into account, identify confounding, etc. There is a shift from prediction tasks to causal tasks.
One key insight from Hernan’s tutorial – we don’t compare treatments, we compare strategies. I.e., studies in this field should move from comparing point interventions to comparing sustained treatment strategies. Moving to treatment strategies, we should be aware of treatment-confounder feedback.

More on this –
Uri Shalit – “Predicting individual-level treatment effects in patients: challenges and proposed best practices”.
Miguel Hernan – “How do we learn what works? A two-step algorithm for causal inference from healthcare data” and tutorial “Comparative Effectiveness of Dynamic Treatment Strategies: The renaissance of the g-formula”.

5 interesting things (17/01/2019)

How to Grow Neat Software Architecture out of Jupyter Notebooks – Jupyter notebooks are a very common tool among data scientists. However, the gap between notebook code and production, or even reusable code, is sometimes big. How can we overcome this gap? See some ideas in this post.

https://github.com/guillaume-chevalier/How-to-Grow-Neat-Software-Architecture-out-of-Jupyter-Notebooks
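One small illustration of the idea — moving logic out of a throwaway cell into an importable, testable function. The module name (`features.py`) and the columns are hypothetical, just to show the shape of the refactor:

```python
# features.py — logic extracted from a notebook cell into a plain module,
# so it can be imported, unit-tested, and reused; the notebook keeps only
# exploration and plotting.
import pandas as pd

def add_session_length(df: pd.DataFrame) -> pd.DataFrame:
    """Derive a session_length column without mutating the caller's frame
    (the notebook version typically mutated df in place)."""
    out = df.copy()
    out["session_length"] = out["end"] - out["start"]
    return out
```

In the notebook this becomes a one-liner (`from features import add_session_length`), and the function gets a real test instead of living in cell-execution order.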

High-performance medicine: the convergence of human and artificial intelligence – a very extensive survey of machine learning use cases in healthcare.

https://www.nature.com/articles/s41591-018-0300-7

New Method for Compressing Neural Networks Better Preserves Accuracy – a paper by (mainly) the Amazon Alexa team. Deep learning models can be huge, and the incentive to compress them is clear. This paper shows how to compress networks without reducing accuracy too much (a 1% drop vs. 3.5% in previous works). This is mainly achieved by compressing the embedding matrix using SVD.

https://developer.amazon.com/blogs/alexa/post/a7bb4a16-c86b-4019-b3f9-b0d663b87d30/new-method-for-compressing-neural-networks-better-preserves-accuracy
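A rough numpy sketch of the core idea — factorizing an embedding matrix with truncated SVD. The sizes are toy, and this shows only the factorization step; the paper's full method also retrains after compression, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(500, 64))  # toy vocab x dim embedding matrix

def compress_embedding(E, rank):
    """Replace E with two thin factors: A = U*S (vocab x rank) and
    B = Vt (rank x dim), so A @ B is the best rank-`rank` approximation."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # vocab x rank
    B = Vt[:rank]                # rank x dim
    return A, B
```

With rank 16 this stores 500×16 + 16×64 ≈ 9k parameters instead of 500×64 = 32k, which is where the size reduction comes from.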

Translating Between Statistics and Machine Learning – different paradigms sometimes use different terminology for the same ideas. This guide tries to bridge the terminology gap between statistics and machine learning.

https://insights.sei.cmu.edu/sei_blog/2018/11/translating-between-statistics-and-machine-learning.html

Postmake – “A directory of the best tools and resources for your projects”. I’m not sure how “best” is defined, and sampling a few categories gives mixed results (e.g. the development category is pretty messy, lumping GitHub, Elasticsearch, and Sublime together). I liked the website design and the trajectory. I do miss a task management category (I couldn’t find Jira, and any.do is not really a calendar). It is at least a good resource for inspiration.

https://postmake.io