AI By The Bay



Understanding and defining AI:

from Technology to Strategy


March 6−8, 2017 in San Francisco

AI By the Bay is a technical conference defining AI in the context of startups and enterprises.
We start with a rigorous introduction to AI by hands-on industry leaders, who live-code the major open-source AI implementations to highlight the key algorithms, data pipelines, and application context.
We dig deep into autonomous transportation, an industry deploying Deep Learning at scale and transforming more rapidly than any other.
We end with a groundbreaking day of AI strategy defined by the full-stack AI leaders who both build AI and define its strategy and future.

Bradford Cross, DCVC, will not invest in your AI startup. Here’s why

A founding partner at DCVC, a seed and Series A venture capital firm whose portfolio includes Buoyant, Mesosphere, Circle CI, Mattermark, Gusto, Kaggle, Feedzai and many others, Bradford Cross invests in machine learning and AI-driven companies. At AI By The Bay he will speak about the machine learning startups he previously founded and ran, and he will also join the AI VC panel on March 8. In advance of his appearance at AI By The Bay in San Francisco next week, Bradford shared with us what still fascinates him about machine learning and AI, and why he won’t invest in your AI startup.



As a founder of two machine learning startups, what attracts you most to machine learning and AI?

BC: I’ve actually founded many AI companies beginning in 2002. First I founded two hedge funds, then Flightcaster and Prismatic, and now more as part of a bigger project that I can’t discuss at the moment.

I’ve always loved to build systems. As a kid I used to get remote control cars whenever possible, so that I could take them apart and build strange new things of questionable utility. Growing up, I was programming, building robots, and reading about space. I was just your typical science and math kid with a penchant for dreaming big and tinkering.

AI is the ultimate domain for systems tinkerers. I think it is the most important field of our lifetimes both because the AI systems we’re building will have a big impact on their own, and because of how AI levers-up every other technical field. We will continue to deliver superior results to existing problems, and open up solutions to totally new problems that can’t even be envisioned without AI.

What’s not to be attracted to?!

Is there something in machine learning and AI that still fascinates you?

BC: I love this work and never cease to be fascinated every day. The list of topics I want to learn about is constantly growing. Despite the volume of reading I do, my reading list feels like it has somehow been monotonically increasing since 1999.

AI is such a broad and deep area across all the math, prob, stats, and CS that you can dig into. Then of course there are all the applications, and fields like NLP and Computer Vision.

I personally get most excited about interesting new applications that compose many datasets and methods – using things like NLP and Computer Vision to extract features from text and images, then join those data with other datasets to tackle new prediction, classification, and ranking problems.

What’s not to be fascinated about?!

As an investor at DCVC, has the way you look at machine learning and AI-powered startups changed in comparison to when you were a part of the startups you founded?

BC: Yes, since I was very early in this new generation of AI startups (2008) and investing in them via DCVC (2011), I have seen a lot of data and have a fair bit of pattern matching. The most important things to weed out are 1) hammers looking for nails – technical teams that are working forward from tech they think is interesting instead of working backward from a market need, 2) AAI (artificial artificial intelligence) startups that are riding the mania and just dropping in buzzwords.

Ultimately I want to see a strong market need that requires the core product value to be powered by AI, and a compounding effect of proprietary models built on proprietary data so that this core product value will be defensible over time.

What AI startup would you invest in now, what does it take to be the company that will get your attention? What metrics/aspects do you look at?

BC: I look at TAM, catalysts, whitespace, and the ability to have a compounding effect of proprietary models built on proprietary data. I think that team-product-fit and team-market-fit ultimately drive product-market-fit, so I want to see the right team for the product and market.

The only metric I care about is growth – show me the number you are trying to grow, how fast it is growing, and explain why.

Whom would you like to connect with at AI By The Bay and who should speak to you after your session?

BC: Anyone interested in working at a top secret AI incubator building vertical AI startups that solve the world’s largest and most timely industry problems. :)

I think that AI is in a mania, so I will probably not invest in your startup. Even if I were interested, you can almost certainly get a late entrant AI VC to supply you with a higher valuation.


Meet Bradford Cross at AI By The Bay on March 8 at the AI.Vision day.

These questions keep AI masterminds up at night



If you are joining AI By The Bay on March 6-8 in San Francisco, you will be truly lucky to see some of the best minds behind an AI industry that is evolving quickly and affecting all of us in the process.

You will hear from Stuart Russell, co-author with Peter Norvig of the definitive modern AI textbook, professor at the Berkeley AI lab and head of its new Center for Human-Compatible AI; Jeremy Howard, founder of Fast.AI and its Deep Learning for Coders MOOC, previously founder of Enlitic and President of Kaggle; Richard Socher, Chief Scientist at Salesforce, previously founder and CEO of MetaMind (acquired by Salesforce), who teaches Deep Learning for NLP at Stanford; Lior Ron, CEO of the self-driving truck company Otto; and many others.

While one might expect AI By The Bay speakers to know all the answers, there are some AI questions that keep them up at night. We asked our speakers to share what the biggest question in AI for them is. Here’s what they said.


Vitaly Gordon, Salesforce: How do we do AI at a really, really large scale?

Gabor Melli, Sony Interactive Entertainment: Will the reinforcement learning of deep graphical models also be applicable in the near-term to structured inference tasks?

Richard Socher, Salesforce: Multitask learning for natural language question answering.

Andrew Burgert, Azumo: How will we track societal impact of AI implementations?

Stephen Merity, Salesforce: Progressively learning and transferring not just weights across tasks but actual skills. Relearning from scratch on tiny specialized datasets will not scale for the tasks required in the real world. Complex reasoning requires knowledge across disciplines and a wide background of knowledge that no one data source can provide. Without this capability, we’ll find progress stalling.

Lukas Biewald, Crowdflower: How do we make AI easy enough to do that every business can use it?

Daniel Golden, Arterys: I wonder if there’s space for a company to perform independent validation of AI-based tools to help hospitals scrutinize them before committing to their use?

Lyle Ungar, University of Pennsylvania: How to incorporate external knowledge into deep learning? And a bonus question: What do we do to help people whose jobs are automated away?

Chris Moody, Stitch Fix: Most of AI is focused on the science of algorithms - how can we push the envelope with new architectures, new techniques, new hardware - to get better scores? I’m interested in the reverse: how do we use algorithms to do science? This is a common question in businesses - our data isn’t a means to an end for a better algorithm, it’s inherently useful. I want my fancy deep learning algorithms to help me understand it, not to obscure it in a black box that yields better scores.

Jake Flomenberg, Accel: What’s your data moat?

Karen Tay, Smart Nation Singapore: How can applications of AI, such as autonomous vehicles, benefit society as broadly as possible?

Malika Cantor, Comet Labs: How will automation impact how we view employment as a society?

Kartik Tiwari, Starsky Robotics: Human bias shapes the development of more sophisticated AI, and it can enter through either the structure/architecture of the learning model or the training data. What implications will that have?

Michael Ludden, IBM Watson Developer Labs: The single most interesting unanswered question is about, and not to be “punny,” the Singularity. When will the first real General AI emerge and in what form and what will it be like?


Join these AI masterminds and other exciting speakers at AI By The Bay on March 6-8 in San Francisco.

NVIDIA’s Danny Shapiro Names the Biggest Barrier for Successful AI Implementation in Self-Driving Cars


Senior director of Automotive at NVIDIA, Danny Shapiro focuses on solutions that enable faster and better design of automobiles, as well as in-vehicle solutions for self-driving cars, infotainment systems and digital instrument clusters. He serves on the advisory boards of the Los Angeles Auto Show, the Connected Car Council and the NVIDIA Foundation, which works on computational solutions for cancer research.

In his keynote at AI By The Bay on March 7, Danny will focus on AI as the heart of self-driving cars, on how Deep Learning enables this AI, and on the new computing approaches required to make it a reality.

We spoke with Danny in advance of his keynote to find out what NVIDIA’s top priority in the self-driving car space is and what’s the biggest myth that Danny wants to unapologetically bust.


What is NVIDIA’s top priority now in the self-driving cars space?

DS: NVIDIA is bringing artificial intelligence and deep learning to self-driving cars, trucks, and shuttles by providing a complete AI computing platform to OEMs, Tier 1s, start-ups, and research companies. For years, the industry has been trying to write software to handle driving functions, but it has been an uphill battle. The reason being, it is impossible to write code that can account for an essentially infinite number of scenarios that can happen while driving. It is just not practical. With the advent of powerful GPUs and access to vast quantities of data, deep learning is giving machines the ability to be taught skills, and in this case autonomous vehicles now have the ability to learn to drive and get smarter over time.

What are the key barriers to fast and successful AI implementation in self-driving vehicles at the moment?

DS: One barrier to successful AI is the computational horsepower required for in-vehicle processing of the massive amount of sensor data. As such, we have focused on delivering a supercomputing platform designed for in-vehicle applications. When we introduced our first-generation DRIVE PX in early 2015, it was rated at 2.3 trillion operations per second (TOPS). Based on customer feedback and the need for greater computational horsepower to process sensor data, we increased the performance of DRIVE PX 2 by a factor of ten, to 24 TOPS. To deliver a truly autonomous vehicle, the AI brain must have incredibly high compute capabilities. For production in the near term, the answer will be our newly introduced small form factor Xavier SoC that will deliver 30 TOPS while consuming only 30 watts of power. As deep learning continues to progress and we continue to push the boundaries of automated driving, we are going to need even more computational horsepower for in-vehicle inferencing.

What is the biggest self-driving vehicles myth that you want to bust?

DS: I think the biggest myth right now is that it will take decades until vehicles on the road have full autonomous capabilities, and that we will need to drive trillions of miles to develop them. What people don’t realize is that OEMs have increased their product rollout cadence dramatically. Automotive technologies such as seat belts, airbags, and anti-lock brakes took several decades to roll out, and I think many people still assume it will take just as long for autonomous cars. In fact, automotive OEMs realize the need to have their technology design cycles more closely mirror those of consumer electronics, and are accelerating them. In addition, simulation is playing a major role in the development and testing of autonomous technologies, which will reduce the number of actual driven miles required for training neural nets. These factors will dramatically shorten the time most consumers will have to wait for a vehicle that is autonomously capable and, of course, safe.



Join Danny Shapiro’s Keynote: AI and Self-Driving Cars at AI By The Bay on March 7. Get 20% off with the code NVIDIA20.

Can creativity and art be replaced with AI? Gigster’s Feynman Liang has an answer with the BachBot

An engineering manager at Gigster and a statistics PhD student at UC Berkeley, Feynman Liang focuses on distributed machine learning and practical systems for deploying machine learning in production. He is a contributor to Apache Spark and a recreational producer of electronic music. During his MPhil degree at Cambridge University, he collaborated with Microsoft Research Cambridge and Cambridge University’s Faculty of Music to build BachBot: an AI that composes music in the style of Bach chorales.

If you haven’t tried the BachBot yet, we strongly recommend you do it here and join Feynman at AI By The Bay on March 6 to learn more.


How did the idea of BachBot come about?

BachBot was proposed by Microsoft Research Cambridge during my MPhil in Machine Learning, Speech, and Language Technology at Cambridge University’s Engineering Department. I enjoy composing electronic music in my free time, and the techniques I was learning to model sequences of phonemes in speech recognition and sequences of words in language modelling were applicable and relevant to modelling sequences of notes and chords.

What is the objective of the BachBot? What do you want to demonstrate?

The goal is to generate Bach chorales that average listeners find indistinguishable from real Bach. To that end, we found that the 759 participants in the BachBot.com study (at the time I was writing my thesis, September 2016) could distinguish real Bach from BachBot only 9% better than random guessing.

We wanted to investigate whether creativity and art, things which seem deeply human, could be replicated using statistical learning algorithms. Another objective was to demonstrate the utility of deep learning outside traditional problem domains (e.g. text/speech modelling, image recognition).

What is one thing that still fascinates you about intelligent bots?

The giant gap between formal theory and empirical studies like BachBot. While deep neural networks have demonstrated profound value in applications across a wide variety of problem domains, explanations for why they work still seem to elude researchers. The outputs are convincing, but the lack of understanding about how the millions of parameters in the deep neural network come together is somewhat alarming: it feels like a black box that’s just dying to be explained through some more concrete theory.

Why should one attend your talk at AI by The Bay, what will you unveil?

In my talk at AI By The Bay I will share how to represent music in a computationally amenable format and how to train a deep LSTM on it. I will also show results from the largest musical Turing test to date and share insight into what these models are actually picking up on (the results support many concepts from music theory).
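
To give a flavor of what “a computationally amenable format” for chorales can look like, here is a minimal Scala sketch of one possible encoding. This is not BachBot’s actual representation; the `Note`/`Token` types and the tokenization scheme are assumptions made purely for illustration. The output is the kind of integer sequence a deep LSTM would be trained on.

```scala
// Hypothetical sketch of encoding a chorale as a token sequence for a sequence model.
object ChoraleEncodingSketch {
  final case class Note(midiPitch: Int, tiedFromPrevious: Boolean)

  sealed trait Token
  final case class NoteToken(midiPitch: Int, tie: Boolean) extends Token
  case object ChordEnd extends Token // marks the end of one vertical slice (chord)

  // flatten a chorale (a sequence of chords, each a sequence of notes) into tokens
  def encode(chorale: Seq[Seq[Note]]): Seq[Token] =
    chorale.flatMap(chord => chord.map(n => NoteToken(n.midiPitch, n.tiedFromPrevious)) :+ ChordEnd)

  // assign every distinct token an integer id
  def vocabulary(tokens: Seq[Token]): Map[Token, Int] =
    tokens.distinct.zipWithIndex.toMap

  def main(args: Array[String]): Unit = {
    val chorale = Seq(
      Seq(Note(72, false), Note(67, false), Note(64, false), Note(48, false)), // SATB chord
      Seq(Note(72, true),  Note(67, true),  Note(65, false), Note(50, false))  // soprano and alto held
    )
    val tokens = encode(chorale)
    val vocab  = vocabulary(tokens)
    println(tokens.map(vocab)) // the integer sequence a deep LSTM would be trained on
  }
}
```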

Whom would you like to meet at AI By The Bay and who should come talk to you at AI By The Bay?

I’d love to meet potential research collaborators - I’ve recently started a PhD at UC Berkeley - and anyone interested in extending / building upon this work. Gigster is also hiring, so software engineers looking for jobs are also welcome!


Meet Feynman at AI By The Bay on March 6 to learn more.

Here’s What Accel Is Looking For In AI Startups

Joining the AI By The Bay AI VC panel on March 8, the AI.Vision day, Jake Flomenberg is a Partner at Accel focusing on next-generation infrastructure, enterprise software, and security investments. We spoke with Jake in advance of the panel discussion, and he shed some light on what investors are looking for in AI startups.


With your previous product roles at Splunk and Cloudera, what attracted you to the venture capital space?

Companies running internet-scale data centers had to build the software underlying their applications themselves to handle the onslaught of big data. The rest of the world can now leverage similar platform infrastructure software from vendors like Cloudera. There are also bespoke vertical applications like Splunk that serve a largely technical audience. However, we’re still just scratching the surface in terms of how to leverage Big Data to help the average business user. While there will be a few big winners in infrastructure and analytics, there is room for hundreds of winners at the top of the stack building Data-Driven Software (DDS) to transform and reimagine every category of business application.

As a venture capitalist, I’m now lucky enough to play a small, supporting role in the journey of the new generation of DDS companies.


What do you enjoy most in your role at Accel?

Meeting new companies in disparate domains is a tremendously intellectually stimulating experience. In that first meeting, I get to listen and learn from someone who is deeply passionate and knowledgeable about whatever it is that they are talking about. From AI for self-driving cars and security to open source for containers and big data to SaaS metrics for fast growing companies, it’s a constant challenge to have a truly refined point of view and enough perspective to offer guidance and feedback.


What place does investing in AI companies take within Accel’s wider investment strategy?

Unfortunately, AI has become a somewhat bastardized term and is the buzzword du jour which most entrepreneurs feel compelled to stick in their pitch deck.

I’d say that our philosophy is that AI is increasingly a subfield of computer science, and it’s increasingly important for founders to understand when and how to use the tools of the trade. That said, we see a bit of abuse going on. Not everything is an “AI” problem. Do you need to hire researchers doing fundamental scientific exploration in this field on your team? Probably not, unless you’re Baidu, Facebook, or Google. If you’re building a SaaS app, is deep learning going to be the key to your success or failure in year one? It’s unlikely. Conversely, if you’re looking to do certain things that were near impossible a few years ago with voice or video data, then yes, it may be appropriate to leverage one of these fancy new tools at the outset.

At the most basic level, what I look for in all the companies I work with is a fundamental data orientation. The future is clearly in DDS. Simply put, going forward there will be DDS and sh*tty software and I’m going to try to invest in the former over the latter. I’ve found some of the most valuable conversations I’ve had are regarding data collection strategy. What data are you uniquely able to collect (and learn from over time) to build a data moat?

While it’s hard to imagine a company having long-term algorithmic differentiation over its competition, a data advantage is much more plausible. Google is a great case in point. Open sourcing TensorFlow points to a belief by management that algorithms alone are not their crown jewels. By contrast, I’d confidently wager that you won’t see them open sourcing their search quality data any time soon.

 

Join Jake Flomenberg (Accel), as well as David Beyer (Amplify Partners), Malika Cantor (Comet Labs), Bradford Cross (DCVC) and Benjamin Levy (BootstrapLabs) at AI By The Bay on March 8th. Book your spot now.

Salesforce Chief Scientist Richard Socher Debunks the Biggest AI Myth

Previously CEO and founder of MetaMind, a startup acquired by Salesforce, Richard Socher now leads Salesforce’s research efforts and works on bringing state-of-the-art artificial intelligence solutions to Salesforce. We spoke with Richard about his top priorities in advance of AI By The Bay and asked him to debunk the main myth about AI.


What is the most exciting project you are working on right now as a Chief Scientist at Salesforce?

At Salesforce I wear two hats, one as Chief Scientist leading AI research and another in applied AI, where I help bring state-of-the-art AI technology to our customers with Salesforce Einstein. I’ll choose an exciting project from each.

On the AI research front, it’d have to be the “joint multi-task” learning model we recently introduced, a single deep neural network model that can learn five different NLP tasks and achieve state-of-the-art results. Our model starts from basic tasks and gradually moves to more complex tasks, and this is continuously repeated until the model finishes learning all of the tasks. By doing this, our model allows the tasks to interact with each other and improves accuracy.

In terms of applied AI, with Einstein, AI features and capabilities are embedded directly into the entire Salesforce platform and empower everyone, from engineers to customers, to leverage AI to be smarter, more productive and predictive. By doing the heavy lifting and removing the complexity of AI, we are able to deliver seamless and scalable AI to Salesforce customers of all sizes.


What are the key challenges for companies that are getting onboard with AI and machine learning?

Data is crucial for AI to work, and most companies struggle with accessing the historical data needed to build predictive models. Wrangling this data is expensive both in terms of time and resources, and technical expertise is required to properly label outcomes for the AI model to predict. For most companies, the technical complexity and resources required are just too much.


What are your priorities in terms of AI and machine learning at Salesforce going forward?

I’m focused on creating the world’s smartest CRM with Salesforce Einstein by advancing the science behind deep learning. This enables the creation of new applications of this technology.

The other priority is creating applications that thoroughly understand language, image and structured data in order to help businesses get answers to any question they have about their data and make more accurate and personalized predictions.


What is the biggest AI myth you would like to debunk?

There are so many floating around today, but I’d say the biggest myth is that AI will become independent, self-aware and take over. We are currently nowhere near anything resembling such a system, and these myths can be distracting from the actual, more pressing issues that could arise.


How can developers, data scientists, and software engineers engage with Salesforce?

Our platforms like Force.com and Heroku make it easy to build add-on apps that integrate into Salesforce’s main applications. We’re also continuing to add new services on top of these platforms that will empower anyone to build AI-powered apps.


You’ve spoken at Text By the Bay in 2015 and Data By the Bay 2016 on Deep Learning with NLP and computer vision. Where are we headed in 2017?

I think it’s time we build a larger, more versatile language and vision system that can do more than one thing, a.k.a multi-task learning.

Right now no one is able to train a single model to do many different kinds of things at once. For example, the question, “Who are my customers?” presents a simple listing task. “Who are my best customers?” adds a complex layer that requires numerous integrated tasks to answer many qualifying questions and make some hard decisions. With questions like the second one in mind, the research team at Salesforce recently created a “joint multi-task” learning model that successively trains with basic tasks, gradually moves to more-complex tasks, remembers all the tasks, and allows them to interact with each other. I’m really excited about it and I think we’ll see more on this front in 2017.


See Richard Socher at AI By The Bay on March 6-8 at The Pearl in San Francisco.

Can Scala help defeat cancer faster? Driver has an answer

The San Francisco-based startup Driver analyzes tumors and connects cancer patients with personalized medicine. Stewart Stewart, Software Developer at Driver, also helps organize events at SF Scala. While Stewart is working on his talk on Embedded Logic Programming in Scala at Scala By The Bay on Nov 11, we had a deep(-er) dive into how Scala helps Driver defeat cancer with Tim Gushue and Sameer Soi.



How are you using Scala and Spark to defeat cancer faster?
We are building a scalable data stack - collection, processing, and classification - to get patients access to the most cutting-edge treatments for stage IV cancer.

What is it about Scala and Spark that make them a good choice for your data engineers and data scientists?
Scala’s integral features - functional paradigm, conciseness, type safety, and interoperability with the vast JVM ecosystem - lend themselves to efficiently developing robust, production-grade software. Spark is great because it lets us do data science on top of a great language: Scala! In addition, Spark comes close to the ostensibly ultimate goal of allowing both quick prototyping and the production of production-grade data science pipelines.
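
To make the “quick prototyping to production pipeline” point concrete, here is a minimal sketch of a Spark ML pipeline in Scala. It is not Driver’s actual stack; the input file, column names, and the choice of logistic regression are assumptions made purely for illustration.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.VectorAssembler

object ClassificationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("classification-sketch").getOrCreate()

    // hypothetical input: one row per sample, numeric feature columns plus a 0/1 "label" column
    val data = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("samples.csv")

    // combine raw numeric columns into the single vector column Spark ML expects
    val assembler = new VectorAssembler()
      .setInputCols(Array("feature1", "feature2", "feature3"))
      .setOutputCol("features")

    val lr = new LogisticRegression()
      .setLabelCol("label")
      .setFeaturesCol("features")

    // the same Pipeline definition serves both the prototype and the production job
    val model = new Pipeline()
      .setStages(Array[PipelineStage](assembler, lr))
      .fit(data)

    model.transform(data).select("label", "prediction").show(5)
    spark.stop()
  }
}
```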

How do you work with the community?  How can the community help you solve this key issue of humanity?
While Spark (i.e. MLlib) does help in quickly deploying production-grade data science pipelines, it has a long way to go to match the depth of libraries that R and scikit-learn offer. We can contribute by prioritizing some of the algorithms and functionality most necessary to help Spark rival the R/Python communities as the lingua franca of data science.

As a company driven by the needs of cancer patients, having access to a diverse set of well-tested, production-ready algorithms in Spark would help us tackle a key issue for humanity. In particular, some of our problem domains do not map neatly onto other common problems faced by the data science community, e.g. smaller training sets, very high dimensionality, etc.


Catch Stewart Stewart at Scala By The Bay at Twitter HQ on Nov 11 if you want to witness problem-solving in a programming paradigm that can help discover solutions to mind-bending puzzles in minutes. As a bonus, these ideas can be re-used for Scala type-level programming.

AI, bots and Scala at Salesforce talks at Scala By The Bay

As Salesforce is joining Scala By The Bay in full force on Nov 11-13 with seven speakers, we caught up with Matthew Tovbin and Chalenge Masekera, data scientists on the Salesforce Einstein team, to talk about Salesforce Einstein, the artificial intelligence platform built into the core of the Salesforce Platform that powers the world’s smartest CRM, and to get a sneak peek of their upcoming talks.




●      Tell us more about Salesforce Einstein and the team behind it.

Salesforce Einstein brings the power of artificial intelligence to every Salesforce user, empowering any company to deliver more predictive and personalized customer experiences across sales, service, marketing, commerce and more. Originally, Einstein started in 2014 as a smaller project  (hear more about our story in the behind the scenes interview on the Salesforce Facebook page) and since then, our team has grown at an incredible rate. A team of more than 175 data scientists built AI directly into the core of the Salesforce Platform, to enable companies of any size to leverage the power of AI and easily build AI-powered apps that get smarter with every interaction, using clicks or code.

●      What are some facts about the Salesforce Einstein data science team we should know?

There’s one thing that’s consistent across our team: we all love what we do. Passion and enthusiasm are the traits we prioritize when hiring new data scientists and engineers onto the team, because our mission is to make an impact on Salesforce’s incredible ecosystem of customers. That means we work on building software that can run hundreds of thousands of models, we work with a huge range of data sets and customers from all industries, and we expect every person on the team to write production code. Impact at that scale is what really inspires us on a day-to-day basis.

●      What are your talks at Scala By The Bay about and why should one attend?

Matthew Tovbin: I will be discussing key concepts when using Scala and how to write machine learning code for production. In my talk, “Doubt Truth to Be a Liar: Non Triviality of Type Safety for Machine Learning,” I will cater to those interested in learning key Scala concepts and how to make practical use of Scala code.
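
For a taste of why type safety matters for machine learning code, here is a generic sketch of the idea (not material from Matthew’s talk): by tagging feature columns with a phantom type, the compiler rejects pipelines that feed, say, a text column into a transformer expecting numeric input. All names below (`FeatureKind`, `standardize`, etc.) are invented for illustration.

```scala
// Hypothetical sketch of type-safe feature handling for ML pipelines in Scala.
object TypedFeatures {
  sealed trait FeatureKind
  trait Continuous extends FeatureKind
  trait Text extends FeatureKind

  // a feature is just a named column, but its kind is carried in the type
  final case class Feature[K <: FeatureKind](name: String)

  // a transformer that only accepts continuous (numeric) features
  def standardize(f: Feature[Continuous]): Feature[Continuous] =
    Feature[Continuous](s"standardized_${f.name}")

  def main(args: Array[String]): Unit = {
    val age     = Feature[Continuous]("age")
    val comment = Feature[Text]("comment")

    println(standardize(age)) // compiles: Feature(standardized_age)
    // standardize(comment)   // rejected at compile time: Feature[Text] is not Feature[Continuous]
  }
}
```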

Chalenge Masekera: I will explore the future with bots. In my talk, “Simplifying Devops with Slackbots,” I’ll describe how to build and deploy a simple bot in next to no time using Scala. Consider attending the talk if you’re interested in automating processes using bots!
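
For a flavor of how small such an integration can be (a hypothetical sketch, not the talk’s code): the simplest devops “bot” just posts status messages to a Slack incoming webhook. The webhook URL below is a placeholder you would create in your own workspace, and the JDK standard library is used so the sketch has no dependencies.

```scala
// Hypothetical sketch: post a devops notification to a Slack incoming webhook.
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

object SlackNotifier {
  // placeholder webhook URL -- create a real one in your Slack workspace
  val webhookUrl = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

  def notify(text: String): Int = {
    // NOTE: real code should JSON-escape `text`
    val payload = s"""{"text": "$text"}"""
    val conn = new URL(webhookUrl).openConnection().asInstanceOf[HttpURLConnection]
    conn.setRequestMethod("POST")
    conn.setRequestProperty("Content-Type", "application/json")
    conn.setDoOutput(true)
    val out = conn.getOutputStream
    try out.write(payload.getBytes(StandardCharsets.UTF_8)) finally out.close()
    conn.getResponseCode // 200 means Slack accepted the message
  }

  def main(args: Array[String]): Unit =
    println(notify("Deploy of service X finished"))
}
```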

●      What part does Scala play in Salesforce’s work and how do you see that changing in the future?

As a team within Salesforce composed entirely of data scientists and engineers, we’re using Scala to make a huge impact across the company. Not only are we democratizing AI to every Salesforce user, but we’re also working to democratize Scala internally. Our team is aiming to create our own Scala library, so other teams within Salesforce can leverage our work and learn to write in Scala. If you want to hear more about our experience evangelizing Scala in the Salesforce ecosystem, come hear our VP of Data Science, Vitaly Gordon, at the panel, “Scaling Scala Teams.”

●      Whom would you like to meet and connect with at Scala By The Bay? Who should come speak to you after the talk? Why are you excited to attend?

Since we’re building out our own Scala library, we’re interested to connect with other data scientists who are building libraries in Scala and learn from their best practices. How have other teams worked with their libraries and how can we?

There are quite a few sessions we’re already adding to our own schedules, including “Deep Learning Around Us” and “Modern Software Architectures and Data Pipelines.”

●      What are your expectations for Scala By The Bay?

We’re most excited to network with like-minded individuals in the field, engineers and data scientists who are interested in the same technologies and building innovative data products. Industry conferences of this nature are an opportunity to share experiences from our respective companies, and learn from the challenges that others have faced with their own products and what solutions they’ve implemented.


To view Salesforce talks, head to Scala By The Bay full schedule and mark your favourite talks using Sched.

Come talk to MediaMath’s Owein Reese at Scala By The Bay. Here’s why.

A hands-on manager who continues to write code at work and in his spare time, Owein Reese of MediaMath also loves windsurfing and flying airplanes. He lands in San Francisco from NYC on November 13 to talk about Recursive Functions for Beginners in Scala at Scala By The Bay at Twitter HQ.

We spoke with Owein to get a sneak peek into his talk and find out what place Scala has at MediaMath!

What is MediaMath talking about at Scala By The Bay and why should one attend?

OR: We’re going to talk about recursion. It’s one of those fundamental building blocks of language design and code structure that, we’ve found, becomes a forgotten relic for most OO-language programmers. This is a shame, as recursion leads to more succinct code and clearer logical paths within functions, and provides a gateway into reasoning about more powerful abstractions. As for reasons to attend, well, if that wasn’t enough…
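
To illustrate the point about succinctness and clear logical paths, here is a minimal Scala sketch contrasting an imperative loop with a tail-recursive function. It is a generic example, not material from the talk.

```scala
import scala.annotation.tailrec

object RecursionSketch {
  // imperative: accumulate state by mutation
  def sumImperative(xs: List[Int]): Int = {
    var total = 0
    for (x <- xs) total += x
    total
  }

  // recursive: the sum of a list is the head plus the sum of the tail;
  // @tailrec makes the compiler verify this compiles down to a loop
  @tailrec
  def sumRecursive(xs: List[Int], acc: Int = 0): Int = xs match {
    case Nil          => acc
    case head :: tail => sumRecursive(tail, acc + head)
  }

  def main(args: Array[String]): Unit = {
    val xs = List(1, 2, 3, 4, 5)
    assert(sumImperative(xs) == sumRecursive(xs))
    println(sumRecursive(xs)) // 15
  }
}
```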

What part does Scala play in MediaMath’s work and how do you see that changing in the future?

OR: Right now Scala is heavily used in our data analysis systems: big data and machine learning. It’s used in ad serving; think millions of req/s globally with less than 10ms latency requirements. It’s used for CPU intensive asset validation frameworks and even for CRUD apps. It’s pretty much used for a variety of purposes and I see that growing, particularly in the areas of large data due to Apache Spark and Flink adoption.

What are developers’ biggest challenges at MediaMath?

OR: From a bigger perspective our challenges are the same as anyone who has to deal with a high I/O, high uptime, globally distributed collection of services. More explicitly, the first and perhaps hardest problem is the high I/O load, uptime and latency requirements of our ad systems. The second is perhaps the canonical data distribution problem amongst interconnected but separate systems, a la cache invalidation. Next is making sense of our metrics in such a way as to inform us of operational issues and more importantly the direction we need to go to ensure those issues are automatically handled at the system level. Finally, the last challenge is to handle the delicate balance between a monolith vs micro-services architecture layout and the engineering costs to both.

Whom would you like to meet and connect with at Scala By The Bay? Who should come speak to you after the talk?

OR: I’m going to Scala By The Bay to see all the new ideas that are coming out of other organizations. To say it another way, I’m very impressed by some of the engineering design behind Twitter’s Finagle, Typelevel’s Cats and Lightbend’s Akka. That’s just on the library level.


Higher up the food chain, I’m also constantly on the lookout for techniques or different approaches people have taken to solving software development as a whole. I don’t think it’s enough anymore to ensure push-button deployments or set up continuous integration/delivery in a micro-services oriented context when those systems have to exist on multiple continents. There are just so many things that can go wrong that being able to detect issues while they’re happening, and building a system amenable to triaging those issues, is more important. We’re smart, but other people are just as smart, and I’d rather gain their wisdom than have to learn it the hard way.

What are your expectations for Scala By The Bay?

OR: Quite simply I’m looking for just one talk to blow my mind. It’s both a high and a low bar, I know. I’ve found that if there’s one guy out there who makes me think in a way that I’ve never thought before then attending a conference was the right decision.


About MediaMath

At MediaMath, they are responsible for globally deployed systems capable of handling peak traffic of 1.7M req/s in aggregate, with a mix of both I/O and CPU bound services. Watch the video below to learn more about Owein Reese and MediaMath and join Owein’s talk at Scala By The Bay on Nov 13 at Twitter HQ in San Francisco.


65 reasons to attend Scala By The Bay 2016, according to the speakers


With over 80 speakers from Twitter, Salesforce, Uber, Twilio, Spotify, ClassPass and many more covering functional programming, reactive microservices and big data pipelines at Twitter HQ in San Francisco this November, Scala By The Bay is promising to be yet another fantastic gathering of Scala masterminds.

If we still have some convincing to do, here are 65 reasons to attend Scala By The Bay, according to our speakers. (P.S. Special shoutout to Jamie Grier!)


  1. A large number of high-level talks. - Sergei Winitzki, Workday
  2. Lots of cool things don’t have very good explanations on the internet; SBTB gives me an opportunity to easily hear about new things that haven’t been publicized much yet. - Buck Shlegeris, Triplebyte
  3. Great people. - Vlad Patryshev, HealthExpense
  4. Meeting more of the community, and learning more Scala! - Brennan Saeta, Coursera
  5. Learning from all the great speakers. - Jean-Marc Soumet, Salesforce
  6. The ability to share and gain the knowledge among the Scala developers. - Matthew Tovbin, Salesforce
  7. Meeting everyone! - Adelbert Chang, Box
  8. The variety of speakers and topics.- Alon Muchnick, Wix
  9. Functions. - John A. De Goes, Slamdata
  10. Reconnecting with people. - Petr Zapletal, Cake Solutions
  11. The chance to get together with other Scala junkies and learn about the new stuff everyone is working on, to continue this “always be looking” cycle I was talking about above. - Sasha Ovsankin, Uber
  12. Getting to see lots of people in the Scala community again. - Li Haoyi, Dropbox
  13. Ability to network with like-minded scala people from around the world. - Kostiantyn Lukianets, ING
  14. Being the dumbest guy in the room. - James Ward, Salesforce
  15. The fantastic speaker lineup! - Leif Wickland, Rubicon Project
  16. Broad range and technical level of talks. - Neville Li, Spotify
  17. The opportunity to fix some of the performance misconceptions that are so common in the Scala community. - Hunter Payne, Credit Karma
  18. The energy of so many Scala programmers doing crazy things to our poor compiler. - Adriaan Moors, Lightbend
  19. Presenting my talk, and possibly meeting some of the authors of the libraries and tools I use. - Daniel Urban, Nokia Bell Labs
  20. Free beer. - Jamie Grier, Data Artisans
  21. Meeting other Scala enthusiasts. - Xinh Huynh, Data Analytics Developer
  22. Seeing what others have done. - Owein Reese, MediaMath
  23. Spontaneous hallway conversations and being inspired to try new things. - Julie Pitt, Order of Magnitude Labs
  24. Getting immersed in the latest and greatest that Scala community has to offer! - Vladimir Bacvanski, SciSpike
  25. Interesting talks. Discussions with people. - Adil Akhter, Ordina
  26. Honestly the hallway session is what I look forward to most at every conference. It’s always great to catch up and meet new people who are doing interesting things with Scala. - Rob Norris, Gemini Observatory  
  27. Cool people. - Nguyen Xuan Tuong (Stanley Nguyen), Symantec
  28. Great people, LOTS of interesting talks and speakers and the venue is going to be awesome! - Daniela Sfregola, Daniela Tech LTD
  29. I’m excited to be around a bunch of really smart people and see the new things they are focusing on. - Dustin Whitney, Project September
  30. The high technical bar. I love the output you guys produce on YouTube. - Adrian Mihai, opening.io
  31. Being at the Twitter office. - Oscar Boykin, Stripe
  32. Vibrant Community. - Alex Kozlov, E8 Security
  33. Sharing my knowledge and learning from others in the most Scala-saturated environment ever. - Nick Stanchenko, Feedzai
  34. Socializing with the SF Scala crowd. - Eugene Burmako, Twitter
  35. Learning about all the different ways people are using Scala to solve their problems. - Dan Simon, Tumblr
  36. Meeting other people I know only from Twitter or Github. - Pathikrit Bhowmick, Coatue
  37. Meeting with like-minded people to share stories and ideas. - Ryan Delucchi, Verizon
  38. Getting to see the sorts of things other people are building. - Oliver Gould, Buoyant
  39. Meeting people! - Stewart Stewart, Driver  
  40. That Twitter can re-introduce itself to the community, and begin an era of participating more closely in core development. - Stu Hood, Twitter
  41. Learning… a lot. - Vincent Guerci, Criteo
  42. It is a gathering of the intelligent minds. - Mohammed Guller, Glassbeam
  43. I can learn more from nice people. - Moon soo Lee, NF Labs
  44. Hearing about how Scala is being used throughout the industry, and learning about other technical stacks and architectures that use Scala. - Stephanie Bian, Brigade
  45. Growing community. - Deepesh Chaudhari, Data Scientist
  46. Meet people. - Flavio Brasil, Twitter  
  47. Scala knowledge sharing and plethora of functions. - Chalenge Masekera, Salesforce
  48. Data scientists and data engineers solving hard challenges. - Alex Ermolaev, Nvidia
  49. Hanging out with other Scala people :) - Holden Karau, IBM
  50. Meeting everyone using Scala. - Jason Swartz, ClassPass
  51. The wide bevy of industrial talks showing how to use Scala to tackle challenging real world problems. - Frank Austin Nothaft, UC Berkeley AMPLab
  52. Meeting other Scala developers. - Nimbus Goehausen, Bloomberg
  53. Opportunity to share how Scala has helped build and scale eero to support the Internet of Things movement. I also look forward to learning how other people and companies have utilized the technology in ways we might not have thought of yet. - Amos Schallich, eero
  54. Meeting SF Scala enthusiasts. - Kaz Sera, SmartNews, Inc. / Good Flow Technologies
  55. The community gathering. - Dick Wall, Escalate Software
  56. The speakers and attendees and all of the conversations. - Michael Pilquist, Comcast
  57. Getting to see a lot of interesting talks and people. - Greg Pfeil, SlamData
  58. Learning about all the new things people are doing with Scala! - Hiral Patel, Yahoo
  59. Panel of speakers. - Alexandre Archambault, Teads.tv
  60. Learning more about the language. - Shubha Nabar, Salesforce
  61. Attending amazing talks by excellent speakers. - James Townley, YoppWorks
  62. Learning from others’ experiences using Scala to do interesting things. - Ascander Dost, SalesforceIQ
  63. Scala and the attendees. - Monal Daxini, Netflix
  64. The combination of being by the Bay while talking about Scala is the most exciting part. - Scott Maher, Twilio
  65. Twitter! - Alexy Khrabrov, Scala By The Bay

If you haven’t made up your mind yet, hurry up. Only 500 tickets are available and many are gone already! Visit scala.bythebay.io to book your ticket.

Commercializing Scala stack and supporting its open-source ecosystem at Lightbend

With two talks at Scala By The Bay - The Future of Services on Nov 11 and Top Mistakes When Writing Reactive Applications on Nov 12 - Lightbend Scala Team Lead Adriaan Moors and Sr. Director of Global Solutions Architects Jamie Allen will give comprehensive insights on these two subjects. In the meantime, learn more about Lightbend.

Lightbend (formerly Typesafe) is the company co-founded by Martin Odersky to both commercialize the Scala stack and support its open-source ecosystem. These two missions were recently decoupled, with the Scala Center taking over the Scala OSS mission, while the company continues to support core Scala compiler and SBT development and also implements a Java-first Reactive Microservices strategy.

Lightbend helped with adding backpressure, a key property of reactive systems, to Spark, and supports Spark on Mesosphere. Lightbend is crucial to everything in the Scala world and employs some of the smartest software engineers around the world. Jonas Bonér, the CTO, is also the creator of Akka, which underpins many reactive systems and is written in Scala.

To join the team, check out Lightbend Careers.

Get in the Driver’s seat to revolutionize cancer patients’ experience


Driver is revolutionizing the cancer patient’s experience by combining a patient-facing internet platform with the power of cancer genomics and precision therapeutics. Driver is uniquely positioned to implement this new model, as it brings together capabilities ranging from sequencing and analyzing tumors to identifying and developing new and targeted cancer therapies.

Driver’s patient platform will change the way patients gain access to new and existing therapies. To develop it they are using Play Framework, Scala.js, and other modern Scala technologies. They are investing heavily in the Scala ecosystem; they sponsored Scala By The Bay, hosted meetups, and plan to continue engaging the community as they grow.


Driver’s talk at Scala By The Bay

Logic (or relational) programming is perhaps underappreciated compared to its declarative sibling, functional programming. In his talk Embedded Logic Programming in Scala on Nov 11, Stewart Stewart will examine miniKanren, a relational programming language embedded in Scala as a DSL, since some problems (layout, scheduling, type inference, logic puzzles) are more clearly expressed as a set of constraints and relations.
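
For readers unfamiliar with the idea, here is a toy sketch of what a miniKanren-style relational DSL embedded in Scala can look like. It is not miniKanren itself, nor Stewart’s implementation; it omits the occurs check and supports only atoms and logic variables, but it shows how “a set of constraints and relations” becomes ordinary Scala values whose solutions are enumerated lazily.

```scala
// Toy sketch of relational programming embedded in Scala (names invented for illustration).
object LogicSketch {
  sealed trait Term
  final case class LVar(id: Int)       extends Term // logic variable
  final case class Atom(value: String) extends Term // ground value

  type Subst = Map[Int, Term]                         // variable id -> term
  type Goal  = (Subst, Int) => LazyList[(Subst, Int)] // a goal yields a stream of answers

  // follow variable bindings until we reach an unbound variable or an atom
  def walk(t: Term, s: Subst): Term = t match {
    case LVar(id) => s.get(id).map(walk(_, s)).getOrElse(t)
    case _        => t
  }

  // try to make two terms equal, extending the substitution if needed
  def unify(a: Term, b: Term, s: Subst): Option[Subst] =
    (walk(a, s), walk(b, s)) match {
      case (x, y) if x == y => Some(s)
      case (LVar(id), t)    => Some(s + (id -> t))
      case (t, LVar(id))    => Some(s + (id -> t))
      case _                => None
    }

  // primitive goal: "a and b are equal"
  def equalTo(a: Term, b: Term): Goal = (s, c) =>
    unify(a, b, s).map(s2 => LazyList((s2, c))).getOrElse(LazyList.empty)

  // introduce a fresh logic variable
  def fresh(f: LVar => Goal): Goal = (s, c) => f(LVar(c))(s, c + 1)

  // disjunction and conjunction of goals
  def or(g1: Goal, g2: Goal): Goal  = (s, c) => g1(s, c) ++ g2(s, c)
  def and(g1: Goal, g2: Goal): Goal = (s, c) => g1(s, c).flatMap { case (s2, c2) => g2(s2, c2) }

  // run a goal over one query variable and report its bindings
  def run(goal: LVar => Goal): List[Term] =
    fresh(goal)(Map.empty, 0).map { case (s, _) => walk(LVar(0), s) }.toList

  def main(args: Array[String]): Unit = {
    // constraint: "q is tea or q is coffee" -- two solutions
    val answers = run(q => or(equalTo(q, Atom("tea")), equalTo(q, Atom("coffee"))))
    println(answers) // List(Atom(tea), Atom(coffee))
  }
}
```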


Want to be part of Driver’s team?

For more information about openings on Driver’s web development team, speak to Stewart Stewart at Scala By The Bay on Nov 11.

Avoid Top Mistakes When Writing Reactive Applications with Cake Solutions

Reactive applications are becoming a de facto industry standard and, if employed correctly, toolkits like the Lightbend Reactive Platform make the implementation easier than ever. But the design of these systems can be challenging, as it requires a particular mindset shift to tackle problems we might not be used to.

In his talk ‘Top Mistakes When Writing Reactive Applications’ on Nov 12, Cake Solutions Lead Consultant Petr Zapletal will discuss the most common things he’s seen in the field that prevented applications from working as expected.


Want to know more about Cake Solutions?

Cake Solutions is a consultancy focusing on building reactive and data processing applications using technologies such as Scala, Akka, Spark and many others.

Check out their website: http://www.cakesolutions.net/

Want to work at Cake Solutions? Head to http://www.cakesolutions.net/jobs

See more at Cake Solutions blog: http://www.cakesolutions.net/teamblogs