WeAreDevelopers World Congress 2019 – Expectations

Reading Time: 3 minutes

Why this one?

I’m a mobile developer, an iOS developer to be specific. So guess what, I love writing code in either Swift or Objective-C for Apple devices. When searching for a conference to attend in 2019, I rather had WWDC in mind. There is one drawback, however: you need some serious money and luck to get there. But in general, there are plenty of other good iOS conferences to attend (NSConf, CocoaConf, …, you name it!).

For an overview and a brief description of iOS conferences in 2019, I found this page from Hacking with Swift very informative.

There is a very nice overview diagram of iOS conferences, organised by location and/or budget, in an old post (2017) at raywenderlich.com. Although the post is outdated, most of these conferences will also happen in 2019, and pricing and location rarely change dramatically.

Wait, but the post is about your expectations for the World Congress 2019 in Berlin. So what happened?

Ok, let’s get to this. When searching for iOS conferences in 2019, this conference somehow slipped into the search results. And it did so because Steve Wozniak was speaking there. Well, which iOS/Mac developer wouldn’t want to see and hear the legendary Steve Wozniak, right?

Once on the WeAreDevelopers homepage, I realised that Steve Wozniak had given his talk there a year earlier, in 2018 in Vienna. Furthermore, this conference isn’t iOS or even mobile specific. That’s another drawback. Anyway, once on the page I skimmed through this year’s speakers. Here are some of them:

  • Rasmus Lerdorf – Inventor of PHP
  • Håkon Wium Lie – Inventor of CSS
  • Andreas M. Antonopoulos – Author, Mastering Bitcoin

Well, maybe Steve Wozniak won’t be there, but there are a lot of high-quality people from IT in general. People who had, and still have, a tremendous impact on millions of developers out there. One can argue about the weaknesses and strengths of PHP, CSS, and Bitcoin. But one thing is for sure: with all their downsides, these technologies are used and working on a daily basis. Furthermore, there are vibrant communities behind them.

Ok then. I’m going because I would like to see a broader picture of tech than just a little piece of it.

By the way, there are mobile-related talks anyway.

Expectations (TL;DR)

  • Experience some good talks from people with impact
  • Get a feeling for where we’re headed in the future
  • Maybe see some demos/MVPs of AR/VR and robotics
  • Get answers on where Bitcoin and blockchain are going and what the state of disruption is
  • Socialising

Machine Learning

Reading Time: 6 minutes

In these days, when the IT industry is at full power, Machine Learning (ML) is a segment we encounter every day, often without knowing it.

The deployment of fast computers with huge computational power allows this segment of Artificial Intelligence (AI) to grow even faster. I will give a short brief on Machine Learning, starting with some definitions and a short history of its use, and then look at some good frameworks for machine learning.

Definition

There are a lot of descriptions of Machine Learning on the internet, but I found this one of the best (source: SAS):

“Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention.”

Artificial Intelligence

We must give the computer enough data if we expect good decisions from the system.

Machine learning uses many techniques to create algorithms that learn from data sets and make predictions on them. It is used in data mining, a technique for discovering patterns and models in data sets where the relationships are previously unknown.
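
To make this learn-then-predict cycle concrete, here is a minimal sketch in Python using scikit-learn (a library not otherwise covered in this post); the hours-studied data set is invented purely for illustration:

    # A minimal supervised learning sketch with scikit-learn.
    # Toy data (invented): hours studied -> exam score.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    X = np.array([[1], [2], [3], [4], [5]])  # feature: hours studied
    y = np.array([52, 57, 61, 68, 72])       # label: exam score

    model = LinearRegression()
    model.fit(X, y)               # the model "learns" from the data

    print(model.predict([[6]]))   # predict a score for unseen input

The same fit-then-predict pattern reappears, in more elaborate forms, in the frameworks described below.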

History

Because of the power of the new computing systems, machine learning today is different from the machine learning of the past. Machine Learning was born from pattern recognition and the theory that computers can learn without being programmed to perform specific tasks. The main goal of the researchers interested in artificial intelligence was to see if computers could learn from data.

One of the most important aspects of ML is its iterative nature. Models created with ML tools are exposed to new data and adapt to the changes without problems. The main characteristic of ML systems is that they learn from previous computations and produce reliable, repeatable decisions and results. This science is not new, but at this moment it has gained fresh momentum.

Arthur Samuel was the first scientist to use the term Machine Learning, in 1959, while he worked at IBM. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. Researchers in the beginning were trying to answer the question “can machines somehow be programmed to learn from data?”, and they used various symbolic methods. The term “neural networks” is also closely correlated with machine learning: inspired by the biological neural connections that constitute the human brain, scientists constructed neural network frameworks that connect many algorithms in order to process huge data inputs.

Here is an overview of the most significant developments of the past 70 years:

  • 1950s – The first machine learning research is conducted using simple algorithms.
  • 1960s – Bayesian methods are used for probabilistic inference in machine learning.
  • 1970s – The ‘AI Winter’, a period of stagnation in ML research.
  • 1980s – The rediscovery of back-propagation causes a resurgence in machine learning research.
  • 1990s – Machine learning shifts from a knowledge-driven to a data-driven approach. Scientists begin creating programs for computers to analyze large amounts of data and draw conclusions – or “learn” – from the results. Support vector machines (SVMs) and recurrent neural networks (RNNs) become popular, and the fields of computational complexity via neural networks and super-Turing computation get started.
  • 2000s – Support Vector Clustering, other kernel methods, and unsupervised machine learning methods become widespread.
  • 2010s – The growth of deep learning makes machine learning an integral part of many software services and applications.

Popular Frameworks

There are a lot of tools for Machine Learning. I will give a short intro to a few of them:

TensorFlow

On the official site, TensorFlow is defined as an open source software library for high performance numerical computation. The library has a flexible architecture that allows easy deployment of computation across a variety of platforms, from desktops to clusters of servers to mobile and edge devices. TensorFlow was originally developed by researchers and engineers from the Google Brain team within Google’s AI organization.
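
As a tiny taste of the library, here is a minimal sketch of such a numerical computation (assuming TensorFlow 2 with eager execution; 1.x releases would need an explicit session to evaluate the result):

    # A minimal TensorFlow sketch: one numerical computation.
    import tensorflow as tf

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])

    # The same matrix multiplication can run on CPU, GPU, or TPU
    # without any code changes.
    print(tf.matmul(a, b))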

There are a lot of giants that use TensorFlow. Among the most popular are Uber, Airbnb, Google, SAP, Snapchat, Nvidia, Coca-Cola, Twitter…

Keras

Keras is a Python deep learning library capable of running on top of Theano, TensorFlow, or CNTK. It was developed by François Chollet. The library gives data scientists the ability to run machine learning experiments fast (a minimal sketch follows the feature list below).

The official Keras site says to use Keras if you need a deep learning library that:

  • Allows easy and fast prototyping
  • Supports both convolutional networks and recurrent networks, as well as combinations of the two
  • Runs seamlessly on CPU and GPU
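
As a minimal sketch of that fast prototyping, here is a tiny fully connected classifier; the layer sizes, input shape, and random stand-in data are all invented for illustration:

    # A minimal Keras sketch: a small network for 10-class classification.
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(32, activation="relu", input_shape=(16,)),  # hidden layer
        Dense(10, activation="softmax"),                  # 10 classes
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # Random stand-in data, just to show the training cycle.
    x = np.random.random((100, 16))
    y = np.eye(10)[np.random.randint(0, 10, size=100)]  # one-hot labels
    model.fit(x, y, epochs=3, batch_size=16)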

Pandas

Pandas is a popular library in the ML ecosystem. The goal of Pandas is to fetch and prepare the data that will later be used in ML libraries like TensorFlow.

It supports many complex operations over datasets. Pandas can ingest data from many different sources and formats, such as SQL databases, text, CSV, Excel, and JSON files.

The data is first loaded into memory, and after that many operations can be performed, such as analysing, transforming, and backfilling missing values.

A lot of SQL-like operations can be performed on datasets with Pandas (e.g. joining, grouping, aggregating, reshaping, pivoting). Pandas also offers statistical functions for performing simple analyses.
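
Here is a minimal sketch of that workflow; the file name sales.csv and its region/amount columns are invented for illustration:

    # A minimal Pandas sketch: load, backfill missing values, aggregate.
    import pandas as pd

    df = pd.read_csv("sales.csv")        # e.g. columns: region, amount

    df["amount"] = df["amount"].bfill()  # backfill missing values

    # SQL-like: SELECT region, AVG(amount), COUNT(*) ... GROUP BY region
    summary = df.groupby("region")["amount"].agg(["mean", "count"])
    print(summary)

    print(df.describe())                 # quick statistical overview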

Conclusion

Most of the industries that use big data have recognized the value of machine learning, and they use it as a tool to grow. Machine Learning is widespread in financial institutions, government, health care, retail, oil and gas, transportation…

Facial recognition technology and its effect on health insurance companies

Reading Time: 6 minutes

When I was young, I used to make fun of people who believe that others can read their future from the palms of their hands, and I still make fun of them. But during the last year I figured out that, with the help of technology, it is now easy to read people’s hidden present.

As an iPhone user, yesterday, while unlocking my phone with facial recognition, I took a moment to think, and I realised how far we have come with this technology across all business fields, especially in health insurance companies. As one of roughly 250,000,000 annual Apple customers (about 250,000,000 iPhone devices are sold per year on average), I was so surprised and fascinated by facial recognition technology that my mouth fell open.

Of late, apparently, our face has become more than just a way of recognizing ourselves. We live in the era of the computer, which sees far more than we do and learns far faster than us. Most importantly, it sees some really personal secrets. For a second my mind froze with happiness, as finally I can imagine what the real emotion of the Mona Lisa was 😀, RIP da Vinci.

Yeah… she looks neutral!

In an amazing computer experiment at Stanford University, two researchers had a computer study more than 35,000 photos of self-identified gay and heterosexual people from public dating sites. After training, the algorithm was able to correctly differentiate between gay and heterosexual men with an accuracy of 81%, and between gay and heterosexual women with 71% accuracy. Just from the face, the artificial intelligence becomes able to differentiate, identify, and make predictions about the sample based on sexual orientation. At the end of the research, the researchers wrote: “our findings expose a threat to the privacy of men and women”. Moreover, if you show the system 5 pictures of the same person, its accuracy jumps up to 91%, and this just from the face. A surprising percentage, as the human mind only reaches about 60% accuracy with the same number of photos. From this study we can see a direct positive relationship between the sample size and the level of recognition accuracy.

From what has been stated above, we can conclude that increasing the sample size increases the accuracy level. This conclusion is supported by results reported from China. The technology there is now able to recognize any specified citizen of a heavily populated country like China, out of 1,400,000,000 citizens (the sample size), in less than 3 seconds. Moreover, the technology has applications in different fields, and many criminals have already been captured using it at the same speed. This is happening at present. You can imagine that if China develops this technology further, which is currently being done, this duration will be minimized and the accuracy level maximized, in line with the conclusion stated above.

Not quite there yet in the real world, but Hollywood already is 😉. ZOOM! ENHANCE!
Selfie time: new AR technology in KFC Original+ Beijing will know what you want to eat 🍔🍟

Last year, Craig Venter, one of the greatest scientists I have ever seen in the field of biology, specifically in DNA sequencing, published a study in the journal PNAS illustrating that, using DNA alone, a system can recognize 80% of faces. If you give the system 10 DNA samples, for example, it can predict what 8 of those persons’ faces will look like; with a sample of 1,000 it reached an accuracy of 80% in its predictions. If this were applied to, e.g., China, you can imagine how high the accuracy would be in the future. Again, our face reflects our genes. If you see this as no big deal, I won’t agree with you: back in the 1950s we exerted every effort just to learn the structure of DNA. So, in a nutshell, whatever is hidden in our genes, we can see it on our faces.

Official Apple TouchID Icon. It seems to be happy…

Some readers may say it is just an identification technology similar to Touch ID, and it is not going to save the Titanic. I will tell them: you are right, it is not gonna save the Titanic, but it is gonna save many health insurance companies. This technology will have a great effect on health insurance companies, as some 30-40% of genetic diseases manifest as craniofacial abnormalities, affecting the face and skull, as Hart and Hart stated in 2009. Health insurance companies can learn a lot about your health without any blood test, just from an analysis of your face, which you have no say in. This will help insurance underwriters minimize risk as much as possible.

To sum up, just using facial recognition technology, devices can reveal our characteristics, orientations, and also our behaviors. So, are you still interested in the iPhone, or will you turn to Android?

References