
I have worked in technology for over 20 years, with a focus on leading digital product development and machine learning teams to help transform organizations and bring new products to market.

You can usually find me as CTO, head of AI & analytics, or leading an R&D department. In general, I have a weak spot for NLP, quantum computing, time series and anomaly detection, knowledge graphs, plus advanced pricing and analytics. As well as working in tech, I am a trustee of the Institute of Business Ethics, where I help to explore and promote the ethical understanding of machine learning and the use of AI.

This is my personal blog and general musings on all things innovation, technology and science. Please feel free to get in touch on LinkedIn.

Latest Posts

The multibillion dollar opportunity – the perfect videoconference

Opinion Summary: as remote working has become the norm, the limitations of remote videoconferencing and collaboration technology have become apparent. The perfect remote collaboration platform will include AI that will blur the line between what is real and what is augmented. In the future, our webcam will actually be a digital avatar that will be a photorealistic real-time representation of ourselves – made from merging our webcam input with digital enhancements designed to increase social interaction. In order to make videoconferencing less tiring and more productive, videoconferencing software needs to help surface the information that is most relevant and suppress what is distracting (or embarrassing). I call this Filtered Augmented Reality.

As remote working and webcams have become the norm, so has the complaint of “I can’t hear you, let’s turn off our webcams”, and the fear of the “up-the-nostril webcam picture” and “the family member in the background ruining your big presentation”.

Whilst webcam usage has become more informal, there is still a long way to go before it is completely hassle- and worry-free.

Recently, technology has been released that aims to solve many of the major videoconferencing problems by blurring the line between augmented reality and videoconferencing. The most exciting and novel of these technologies come from Nvidia’s AI research team, which built on the company’s hyperrealistic face generation technology.

Repurposing deepfake technology

Nvidia has been able to reduce the bandwidth required for video transmission by a factor of ten. By constraining the scope/imagination of the generative adversarial networks behind its hyperrealistic face generation framework, the research team was able to repurpose the AI to generate one specific face. A neural network on the sender’s side builds a model of the user’s face and transmits only the information the receiving computer needs to “regenerate the face”. This is more than just video compression; it is a form of augmented reality, because the regenerated face can be adjusted. The revolutionary improvement is that the face doesn’t have to be a perfect reconstruction, so subtle changes can be introduced: for example, the image can always be rendered “straight on” rather than “up the nose” or from the side as the person glances at their second screen, and the eyes can be repositioned to maintain eye contact with the audience.
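To make the data flow concrete, here is a minimal, purely illustrative Python sketch (the functions, landmark indices and byte figures are my own stand-ins, not Nvidia’s actual Maxine API): the sender ships one reference frame and afterwards only per-frame facial keypoints, which the receiver uses to regenerate, and subtly adjust, the face.

```python
# Hypothetical sketch of keypoint-based "face regeneration" video compression.
# Instead of streaming full frames, the sender extracts a handful of facial
# keypoints per frame; the receiver re-synthesizes the face from one reference
# frame plus those keypoints. All model functions here are stand-ins.

import numpy as np

FRAME_SHAPE = (720, 1280, 3)   # full webcam frame (~2.7 MB raw)
NUM_KEYPOINTS = 68             # a typical facial landmark count

def extract_keypoints(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a learned keypoint detector (sender side)."""
    return np.random.rand(NUM_KEYPOINTS, 2).astype(np.float32)  # ~0.5 KB per frame

def regenerate_face(reference: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """Stand-in for a generative model (receiver side) that warps the
    reference frame to match the transmitted keypoints."""
    return reference  # a real model would synthesize a new frame here

def adjust_gaze(keypoints: np.ndarray) -> np.ndarray:
    """The 'filtered augmented reality' step: nudge the eye keypoints so the
    reconstructed face keeps eye contact with the camera (indices hypothetical)."""
    adjusted = keypoints.copy()
    adjusted[36:48] *= 0.99
    return adjusted

# One reference frame is sent once; afterwards only keypoints travel per frame.
reference_frame = np.zeros(FRAME_SHAPE, dtype=np.uint8)
for _ in range(10):                        # simulate 10 frames
    live_frame = np.zeros(FRAME_SHAPE, dtype=np.uint8)
    kp = extract_keypoints(live_frame)     # transmitted: ~0.5 KB vs ~2.7 MB raw
    kp = adjust_gaze(kp)
    received_frame = regenerate_face(reference_frame, kp)
```

The bandwidth saving falls out of the arithmetic: a raw 720p frame is millions of bytes, while a few dozen keypoints are a few hundred, which is why the same idea also helps latency.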

Some might argue that this type of adjustment makes the interaction less genuine as the reconstructed image isn’t a true likeness of the transmitter (and doesn’t accurately convey what they were actually doing), but I think most people will not mind as the interaction will better imitate in-person interactions.

Improved accessibility and real-time translation

A huge benefit of the reduced bandwidth is the reduction in latency, ensuring that frames don’t need to be dropped for the video to keep up with the audio (the reason why videoconferencing sometimes judders). By using upscaling AI more usually found in video games (deep learning super sampling), the reconstructed image can also be of a higher resolution than any cheap laptop webcam could ever produce. Both the improved resolution and frame rate could benefit the hard of hearing by allowing them to lip-read more easily – something that is not currently easy unless video reception is perfect and expensive webcam equipment is used. Nvidia and Google both have real-time language translation (and captioning) that can make international business meetings more seamless. Currently, Nvidia’s technology is limited to audio-to-text translation, but there is no reason why the person’s accent/voice could not be replicated, and their mouth and face adjusted to match. Start-up Descript already has voice manipulation technology that can be used to create synthetic podcasts.

Nvidia Maxine Platform

Reducing meeting fatigue

Microsoft Teams has released Together mode. The feature is based on research showing that people are used to interacting with others by reference to their location in a room. Our brains expect the people we are talking with to be in the same surroundings and in a fixed position. Microsoft claims that brain scans indicate that removing people from their individual surroundings, making them the same size, and placing them in a common setting can decrease the mental effort of virtual meetings.

Together mode is meant to assist meeting participants to share non-verbal social cues, such as being able to lean over and tap someone on the shoulder, or to virtually make eye contact (for example when a colleague waffles on during a presentation). Social cues are important – there is nothing worse than giving a virtual presentation and receiving no real-time feedback.

You can do better than a screen share of a PowerPoint presentation

Prezi is an interesting start-up that is trying to make videoconference presentations more engaging by mixing the presenter and presentation. Usually, as soon as the presenter shares their screen/presentation, their face becomes a tiny box that no one continues to look at – the exact opposite of what should happen in an engaging presentation.

Prezi allows for the creation of presentations that incorporate the presenter as part of the presentation, therefore keeping them interacting with the audience.

A fractured ecosystem is ready for consolidation

Remote working has accelerated the uptake of start-ups like Zoom and Miro. Both of these start-ups have excelled at making virtual meetings and seamless collaboration available to those who were previously not heavy videoconference users (mostly non-corporates).

The explosion of video conference software has led to a fractured ecosystem and “installing new meeting software fatigue”. It’s annoying having to install a new meeting client, and learn how it works – “which button is it to share the screen again?!” Datanyze tracks the market share of 96 different web-conferencing platforms/tools.

The fact that Nvidia is releasing these advanced AI technologies as an SDK (rather than creating its own conferencing platform) will allow smaller software companies, which cannot afford large AI research departments, to compete with the larger players. Currently, Microsoft is in a good position to gain a large market share in the videoconferencing space. Whilst their collaboration tools aren’t as good as Miro’s, and their augmented reality is not as good as Nvidia’s, they are good at nearly everything. The integration with Outlook, and the fact the Microsoft Office ecosystem is so ubiquitous, gives Teams a great market advantage. In March 2020, Microsoft saw a 775% increase in the use of Teams, and didn’t face the same privacy scandal that Zoom faced. The issue is, Microsoft does have a history of dropping the ball; they are only now killing the disaster that was Skype.

Conclusion

The traditional definition of augmented reality is technology that superimposes a computer-generated image on a user’s view of the real world, creating a composite view. I think this definition needs to be updated; the generated augmented reality needs to look real but it doesn’t have to be completely true to life. In order to make videoconferencing less tiring and more productive, videoconferencing software needs to help surface the information that is most relevant and suppress what is distracting. Filtered Augmented Reality will become the norm.

QY Research estimates that the videoconferencing market will grow from $12 billion to nearly $20 billion in 2020. There is a big commercial opportunity for the company that can create the perfect videoconference and collaboration tool.

Introduction to Nvidia Maxine Platform


If you can donate your body to science why can’t you donate your data? Wearables could be so much better.

Opinion Summary: Who owns your wearable data? The data collected by wearables and medical IOT devices should belong to users. Manufacturer data silos and proprietary AI are stopping wearables from revolutionizing healthcare. User data needs to be liberated from device manufacturers like Apple, Google, Samsung and Withings, as closed ecosystems are limiting the value of the collected data and leading to a subpar user experience. People should also have the option to securely donate their data to science. Federated machine learning and smart contracts could allow people to confidentially contribute their data to medical research.

My experience of health IOT devices

I love biohacking and tracking my personal data. As a self-confessed geek, I enjoy experimenting with gadgets and using them to discover more about my own body rhythms. I have the following smart IOT personal devices: smartwatch, under-mattress sleep detector, sleep-recording snore pillow, scale, body composition tracker, EEG sleeping headphones, blood pressure monitor, peak breath flow meter, thermometer and an overnight blood oxygen reader.

All this tech has taught me the following:

  1. Work stress hugely affects my sleep.
  2. Bad sleep is the root of all my personal evils – everything from managing diet, to willingness to exercise.
  3. None of the readings ever match each other – I have multiple devices that track the same metric and the devices rarely agree.
  4. The apps don’t talk to each other – I cannot get a single view of all my data.
  5. Apps are not great.
  6. Syncing is always buggy.

Recently I found the great YouTube channel of Rob ter Horst, a postdoctoral researcher at CeMM (the Research Center for Molecular Medicine). He spends 11 hours a week methodically measuring and detailing everything about his life using high-quality equipment and tests (temperature, sleep patterns, blood work, urine samples, mood diary, food diary, activity tracking).

Firstly, I was astounded that someone would have the dedication to track this amount of information for two years; secondly, a benefit of the meticulous tracking was that Rob could independently assess the quality of wearables from Fitbit, Withings, Oura and Dreem. His research has confirmed my own impression – the quality of wearables data can fluctuate hugely. For example, step counters can overestimate by up to 30%. Below is a chart he shared comparing four activity counters:

Data from Rob ter Horst: https://youtu.be/-T2xvWq5e3o

If the majority of the devices measure the same thing, why is the quality so different?

A great benefit of smart wearables is that they are easy to incorporate into everyday life. Except for having to remember to charge them, watches, bands and rings work tirelessly in the background collecting data. This perfectly suits the average user, who in general wants maximum data for minimal effort. This limits the majority of data collection to temperature, motion, location, oxygen level, skin impedance, and pulse rate. The similarity between sensors is why, whenever a new sensor is released, there is such a huge fanfare! One of the main features hyped at the release of the Apple Watch 4 was the inclusion of an ECG sensor. The fact the device received FDA De Novo classification gave the device gravitas.

The other common personal IOT device is the smart scale. The humble bathroom scale first became popular in the 1960s and for many years this was the only biohacking device people could easily access. When the first smart scale launched in 2009 it promised to revolutionize our health by accurately tracking weight and body composition (something I don’t think really happened).

AI is the real differentiator

Raw accelerometer data: https://pavlov.tech/2015/11/30/accelerometer-based-step-counter-in-r/

Raw sensor data is only half of the picture. What people want is actionable insights, not raw tracking data (see above image). The only way a smartwatch, phone or sleep band can tell if you are cycling, running, sleeping, or snoring is by using AI to interpret the raw sensor input. It was only through advances in machine learning that smart IOT devices were able to make sense of this data: traditional deterministic algorithms would be too complex to write (and not very good), whereas supervised machine learning can quickly turn the raw data into meaningful output.
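As a rough illustration of that last point, here is a toy sketch (entirely synthetic data and made-up feature choices, not any vendor’s actual pipeline) of how supervised learning maps raw accelerometer windows to activity labels:

```python
# Toy sketch: supervised classification of 3-axis accelerometer windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window: np.ndarray) -> np.ndarray:
    """Summarize a (n_samples, 3) accelerometer window into simple features."""
    magnitude = np.linalg.norm(window, axis=1)
    return np.array([magnitude.mean(), magnitude.std(),
                     magnitude.max() - magnitude.min()])

rng = np.random.default_rng(0)
X, y = [], []
for label, scale in [("resting", 0.05), ("walking", 0.5), ("running", 1.5)]:
    for _ in range(100):
        window = rng.normal(0, scale, size=(128, 3))   # synthetic sensor window
        X.append(features(window))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
new_window = rng.normal(0, 0.5, size=(128, 3))
print(clf.predict([features(new_window)]))   # likely "walking"
```

The quality differences between devices largely come down to this layer: how good the training data, features and model are, not the sensor itself.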

Due to the need to go to market quickly, devices are often released before the AI components have been fully trained. This is why many IOT devices need regular firmware updates and don’t work very well when first released. In general, the only differentiation between devices is battery life, aesthetics/UI, and the sophistication of the AI.

A cynic would reason that Fitbit devices overestimate “steps” because the company wants its users to feel “fitter” and have bragging rights. This is the only reason I can think of why a $20 USD Xiaomi Mi Band 3 would detect activity more accurately than the Fitbit Charge 3 (but I’m willing to be told otherwise).

Is there something better we can do with all this data?

An issue with all this personal IOT data is that it is locked in silos. There is an ecosystem war at play – with Apple, Google (now the owner of Fitbit), Samsung and Withings all competing for our data.

Whilst the data is locked away, a user (or their doctor) can’t easily see a holistic overview of their health, nor can the data be used to improve healthcare, enable earlier diagnostics or assist with drug discovery. The elephant in the room is “who owns your data” – the correct answer should be YOU. But this isn’t the case – a lot of IOT and wearable devices don’t let users easily export their own raw data; sometimes the best a user can do is a lo-res screenshot.

Luckily technology can help us securely unlock our data.

Federated and Gossip machine learning

Federated machine learning turns AI training on its head. Rather than raw data being shipped to a central data warehouse for analysis, federated machine learning performs the training on the private device and only transmits the end result. The central server collects the anonymized set of partially trained models and, by averaging them together, generates a global model. (Google has a great cartoon here explaining federated learning in detail).
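A toy sketch of the idea, using a simple least-squares model and data-size-weighted averaging (my own simplification of schemes like federated averaging, not Google’s production system):

```python
# Toy federated averaging: each device fits a model on its own data; only the
# fitted parameters leave the device, and the server averages them weighted by
# how much data each device holds.
import numpy as np

def local_train(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares fit on the device; only the weights are returned."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(models, sizes):
    weights = np.array(sizes, dtype=float) / sum(sizes)
    return sum(w * m for w, m in zip(weights, models))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
devices = []
for n in (50, 200, 120):                  # three devices with different data volumes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    devices.append((X, y))

local_models = [local_train(X, y) for X, y in devices]
global_model = federated_average(local_models, [len(y) for _, y in devices])
print(global_model)   # close to [2.0, -1.0], without pooling any raw data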

Federated Learning

Gossip Learning

Gossip learning extends the principles of federated learning but removes the central server. Devices communicate directly with each other, sharing their trained models in a peer-to-peer fashion. Whilst this means each device needs to do more computation, this should not be a problem: the latest mobile phones include AI chips, and training can be configured to launch only when the phone is charging.
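And a correspondingly minimal sketch of gossip learning, where peers repeatedly average their models with a random neighbour until they converge, with no server involved (again an illustration only, not a production protocol):

```python
# Gossip-learning sketch: no central server; peers repeatedly average their
# model parameters with a randomly chosen neighbour's, and the models converge.
import numpy as np

rng = np.random.default_rng(2)
peer_models = [rng.normal(size=2) for _ in range(5)]   # locally trained models

for _ in range(50):                        # gossip rounds
    i, j = rng.choice(len(peer_models), size=2, replace=False)
    merged = (peer_models[i] + peer_models[j]) / 2
    peer_models[i] = merged
    peer_models[j] = merged

print(peer_models[0])   # all peers end up with (nearly) the same model
```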

Combining Open [APIs + Data + Algorithms] could set the data free

Federated and gossip machine learning can help to secure data, but what if a user actively wants to share their data? For example, how can a user share their data with their doctor or hospital? Telemedicine is a growing field, but a current downside is that patient information is often shared and collected through dedicated, siloed applications. Governments should promote and enforce standards for the sharing of personal IOT data, similar to PSD2 / Open Banking.

To solve the issue of the lack of standard APIs, start-ups are building APIs that connect to multiple devices and then harmonize the results. For example, Humanapi.co connects to 300+ devices and provides both wearable and clinical APIs.
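As a purely hypothetical illustration (the payload formats and field names below are invented, not Human API’s actual schema), harmonization boils down to mapping each device’s response into one common structure:

```python
# Hypothetical harmonization layer: device-specific payloads mapped into one schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class StepReading:
    day: date
    steps: int
    source: str

def from_vendor_a(payload: dict) -> StepReading:      # imagined format {"dt": ..., "stepCount": ...}
    return StepReading(date.fromisoformat(payload["dt"]), payload["stepCount"], "vendor_a")

def from_vendor_b(payload: dict) -> StepReading:      # imagined format {"date": ..., "steps": ...}
    return StepReading(date.fromisoformat(payload["date"]), payload["steps"], "vendor_b")

readings = [
    from_vendor_a({"dt": "2021-01-04", "stepCount": 10450}),
    from_vendor_b({"date": "2021-01-04", "steps": 8120}),
]
# With everything in one schema, readings from conflicting devices can at least be compared.
print({r.source: r.steps for r in readings})
```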

In 2019, the Open Wearables Initiative (OWEAR) was launched. They describe themselves as follows:

OWEAR is a collaboration designed to promote the effective use of high-quality, sensor-generated measures of health in clinical research through the open sharing of algorithms and data sets.

OWEAR serves as a community hub for the indexing and distribution of open source algorithms. To identify performant algorithms in areas of high interest, OWEAR acts as a neutral broker to conduct formal and objective bench marking of algorithms in selected domains.

We create searchable databases of bench marked algorithms and source code that can be freely used by all, thereby streamlining drug development and enabling digital medicine.

https://www.owear.org/

The main innovation of OWEAR is the sharing of algorithms. By sharing algorithms, they can drive the price of wearables and IOT devices down (by reducing the investment required to develop bespoke high-quality algorithms) and therefore promote their adoption. For wearables and IOT to provide maximum benefit they have to be globally available and not just available to a few wealthy tech geeks; for this to happen they need to be cheaper to make.

If I can donate my body to science why can’t I donate my data?

A new interesting area that is evolving is the ability for people to securely donate their data to medical research.

The Data Donor Movement initiative is promoting the ability for patients to donate their data to medical research. The initiative is petitioning governments to make it mandatory for healthcare organizations to share their data. By sharing data the movement hopes to:

  • Push healthcare toward prevention
  • Aid doctors with early and better diagnoses
  • Provide personalized treatment plans for patients
  • Fuel medical research and lead to groundbreaking discoveries in healthcare

In order to promote the donation of data the Data Donor Movement is firmly committed to only selectively sharing data:

Will share with | Will NOT share with
Research Institutions | Insurance Companies
Hospitals | Advertising & Marketing Companies
Government-funded Health Organizations, like Public Health in Canada | Social Media Platforms, like Facebook
Pharma | Unauthorized 3rd parties

To facilitate the securing of data, research has been conducted into using blockchain smart contracts as the enforcement layer. Due to the amount of data being collected, the raw data cannot be stored on the blockchain itself, but smart contracts could facilitate the access and decryption of anonymized data.

Smart contracts allow the creation of agreements in any IoT devices which is executed when given conditions are met. Consider we set the condition for the highest and lowest level of patient blood pressure. Once readings are received from the wearable device that do not follow the indicated range, the smart contract will send an alert message to the authorized person or healthcare provider and also store the abnormal data into the cloud so that healthcare providers can receive the patient blood pressure readings as well later on if needed.

A Decentralized Privacy-Preserving Healthcare Blockchain for IoT – https://www.mdpi.com/1424-8220/19/2/326/htm
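In plain Python, the contract logic quoted above might look something like the sketch below (a simulation only; a real deployment would express these conditions in a smart-contract language and keep raw readings off-chain, with the alert and storage hooks as stand-ins):

```python
# Plain-Python simulation of the quoted contract logic: fixed blood pressure
# bounds, an alert plus off-chain storage whenever a wearable reading falls
# outside them.
SYSTOLIC_RANGE = (90, 140)     # hypothetical contract conditions
DIASTOLIC_RANGE = (60, 90)

def send_alert(message: str) -> None:        # stand-in for notifying the provider
    print("ALERT:", message)

def store_off_chain(reading: dict) -> None:  # stand-in for encrypted cloud storage
    print("stored:", reading)

def on_reading(systolic: int, diastolic: int) -> None:
    in_range = (SYSTOLIC_RANGE[0] <= systolic <= SYSTOLIC_RANGE[1]
                and DIASTOLIC_RANGE[0] <= diastolic <= DIASTOLIC_RANGE[1])
    if not in_range:
        send_alert(f"Blood pressure out of range: {systolic}/{diastolic}")
        store_off_chain({"systolic": systolic, "diastolic": diastolic})

on_reading(118, 76)    # in range: nothing happens
on_reading(162, 101)   # out of range: triggers the alert and storage path
```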

New sensors + new data sources

In general, it’s only when someone has an underlying medical issue that they will invest in additional data collection. For example, the most widely used personal medical device is a diabetes glucose monitor. Device manufacturers are aware of the effort barrier and the lack of differentiation between sensors, so they are putting increased effort into more advanced sensors. Here are some devices that are scheduled to be released:

  • Smart Body Scanner – by using computer vision, a body scanner can help you to better track your weight and body shape. Two examples are NakedLabs and ShapeScale.
  • Smart Toilet – could detect a range of disease markers in stool and urine, for example markers of colorectal or urologic cancers. Likely to be especially attractive to those who are genetically predisposed to certain conditions, such as irritable bowel syndrome, prostate cancer or kidney failure, and want to keep on top of their health.
  • Glutrac – has developed a watch that can track glucose levels with optical sensors. This could assist with diabetes management as well as potentially tracking diet.
  • Dreem Headband – can detect brain activity to track your sleep.
  • Neuralink – Elon Musk’s brain-computer interface. Besides letting you control your surroundings, it could be used to collect information on your health and brain activity.

The majority of product development has gone into non-invasive sensors. The issue with these is that they limit the type of insight that can be gathered. Companies like Thriva have stepped in to fill this gap: they provide monthly or quarterly home blood testing and deliver insights via an app. It would be great to have this data mixed in with the personal IOT data.


Crowdfunding platforms have been around for ten years now. Is it time for them to rebuild trust?

As both Kickstarter and Indiegogo approach teenagehood, I thought I would reflect on my own experiences of backing 25 projects. Crowdfunding platforms aim to assist in the funding of projects by connecting creators with willing customers before the product has been made. Crowdfunding platforms help to drive innovation and creativity, as for many small and independent artists or manufacturers crowdfunding may be the only funding option! From a financial and project-count point of view, both platforms have been a great success. Kickstarter alone has raised over $4.5 billion in pledges from over 17 million backers! Kickstarter has the larger audience but only supports creators in 18 countries. Indiegogo, on the other hand, is available in over 200 countries.

From a customer satisfaction point of view, things are more mixed. Kickstarter has a rating of 1.3/5, and Indiegogo a rating of 1.1/5, on review site Trustpilot. Obviously online reviews are not always representative, but what is more worrying is that crowdfunding scams are so common the Federal Trade Commission has a page dedicated to avoiding them.

A major concern for potential backers is being defrauded and scammed by a project that will never be delivered.

Techlicious has five tips on how to assess crowdfunding campaigns:

  1. Is the product too good to be true?
  2. What is the background of the creator team?
  3. How do the creators plan on spending your money?
  4. How complex is the product to manufacture?
  5. What are people saying in the comments?

In spite of my diligence, two of my 25 projects never delivered. For those that were delivered, in general I would say the items are never as good as I had hoped, and the communication updates were not as frequent or worthwhile as I would have liked. Usually rewards are delivered late (by up to a year or two), which would be fine if projects were better at explaining delays. The anxiety of “I’ve not heard from the project for a while, have they gone AWOL?” is always present. Part of the appeal of crowdfunding platforms to backers is to be included in, and part of, the cutting edge of product innovation, as well as having an inside view of the manufacturing process.

All is not bad; I did receive a few items I really liked:

  • Mu One: World’s Thinnest 45W International Charger – means I never run out of charge.
  • ULH: The Ultimate Lens Hood – changed the way I do photography.
  • MOGICS Donut & Bagel – a fantastic power strip device for people who travel.

In spite of these few gems, I have drastically slowed down on supporting new initiatives. The downsides and risks are just too high. One of the few places backers have to go to vent their anger is on Facebook (until the project page turns off comments). When there is nowhere else to go, backers have switched to Facebook’s Crowdfunding Scams & Failures Awareness Group. To rub salt in the wound, one of the projects I backed that never delivered still has its crowdfunding page up! This is what a fellow backer of the project has to say:

If a project starts to fail, comments like this can be common.

Independent Analysis

In 2015 Kickstarter collaborated with Professor Mollick from the University of Pennsylvania in an independent analysis of project performance. The professor reported:

  • 9% of Kickstarter projects failed to deliver rewards
  • 8% of dollars pledged went to failed projects
  • 7% of backers failed to receive their chosen reward
  • 65% of backers agreed or strongly agreed with the statement: “The reward was delivered on time”
Professor Mollick’s research

The bad reviews are no surprise when only 65% of rewards are delivered on time!

Some Improvement Suggestions

I think it is possible for crowdfunding platforms to rebuild trust and for them to become even better at their mission of connecting the crowd to worthy projects. They could:

  1. Enhance the project vetting process – be clear and transparent on what reference checking has been done for each project. Kickstarter already does some initial investigation but Indiegogo does not.
  2. Put some type of insurance in place so if a backer doesn’t get delivery of the item within a year they get part of their money back. If a project fails, Indiegogo does try and recover monies but this is not always successful.
  3. Better training for campaign owners on how to provide updates to backers. Crowdfunding platforms do try and encourage campaigns to provide updates but often these are repetitive and not informative. There’s only so many times you can see a picture of the same prototype from a different angle. Better updates would go a long way to building trust.
  4. Actively monitor projects, update backers if a project has run into difficulties, and step in if there is an issue.
  5. Be more visible in “protecting” and ensuring backers get a good deal and are not scammed.
  6. Provide a clear and more visible complaints service.
  7. Reconduct independent research to dig deeper into the running of projects and customer satisfaction.

Some of these suggestions might be difficult to implement as they could change the fundamentals of the crowdfunding business model. Kickstarter describes itself only as a connector: once a project is funded, it no longer has anything to do with the money or project delivery. In order to build trust this approach might need to be revisited. In the same way that Uber has had to realize it is more than just a connector between a driver and a passenger, I think crowdfunding platforms need to admit that they have a greater responsibility in protecting backers as well as guiding creators.

Crowdfunding platforms perform an invaluable service and have allowed many small, creative and innovative projects to come to market that would never otherwise have been funded, but in order to grow and remain relevant they need to find a way of rebuilding trust. As they have become larger they have become victims of their own success, as less competent campaigners, or scammers, have used the platforms.


Early detection of breast cancer is vital to improving patient survival rates. Even though machine learning could help by improving access to testing, there are still many open ethical and legal issues related to AI decision-making.

Cancer is not a single disease but rather a collection of over 200 illnesses which share a combination of underlying environmental and genetic causes. This makes cancer one of the most complicated diseases to manage and cure. It is therefore great news that the 2021 forecast is for five-year survival rates for localized/early-detected breast cancer to approach 100%.

SEER Stage | 2020 Breast Cancer 5-year Relative Survival Rate
Localized | 99%
Regional | 86%
Distant | 27%
All SEER stages combined | 90%

Survival rates have drastically increased since the 1970s (from 76%). This has been due to a number of advances in primary research and care management:

  • HER2-Directed Therapies – by exploiting a protein that cancers often express, doctors are able to directly target cancer cells.
  • Gene Expression Testing – using gene testing to detect the presence of genes associated with breast cancer.
  • Hormonal Therapy – therapy can be used to prevent oestrogen from binding to the oestrogen receptor.
  • Less-Extensive Surgery.
  • Exercising and Maintaining a Healthy Weight.

In the future machine learning could be added to this list.

Machine learning and the ethics of AI in cancer care

Machine learning and big data are not new to drug discovery; both techniques have been used in research to 1) allow scientists to analyse large data sets to find novel correlations between genetics, environment and risk factors, and 2) assist in the automation of genome sequencing.

BenchSci alone publishes a list of over 230 start-ups using AI in drug discovery.

AI and robotic experimentation

Liverpool researchers build robot scientist that has already discovered a new catalyst
University of Liverpool robot

A new area of research is automated experimentation. In July 2020, the University of Liverpool designed a robot that can run experiments automatically, 24/7.

“The robot independently carries out all tasks in the experiment such as weighing out solids, dispensing liquids, removing air from the vessel, running the catalytic reaction, and quantifying the reaction products. The robot’s brain uses a search algorithm to navigate a 10-dimensional space of more than 98 million candidate experiments, deciding the best experiment to do next based on the outcomes of the previous ones. By doing this, it autonomously discovered a catalyst that is six times more active, with no additional guidance from the research team”

https://phys.org/news/2020-07-robot-scientist-catalyst.html

AI and patients

It has already been proven that AI can assist in the research process; the next step is to deploy AI more actively, nearer to patients. This step is vital, as early detection is a significant factor in improving patient outcomes. Any technology that can make testing cheaper, more accessible and more widespread will be of great benefit. Currently, examinations and mammographic scans are expensive and prioritized according to risk profiling. Whilst this makes logical sense, not everyone who ends up with cancer falls into a pre-identified risk category.

Using AI to assist the medical decision-making process has come a long way in the past two years. In 2018 the Asian Pacific Journal of Cancer Prevention published results that showed that artificial neural networks (ANNs) could identify breast cancer with a sensitivity of 82% and specificity of 90% based on mammographic scans and patient data. In 2020 these results were improved by researchers from Google Health, DeepMind, Imperial College London, the NHS and Northwestern University. The new model performed with similar accuracy to expert radiologists. The study found that if deployed, the model could reduce the workload of a second reader by 88%.
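For readers unfamiliar with those two metrics, here is the arithmetic behind them (the counts below are made up purely to illustrate what 82% sensitivity and 90% specificity mean, and are not from either study):

```python
# Sensitivity and specificity computed from confusion-matrix counts.
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)        # share of actual cancers the model flags

def specificity(tn: int, fp: int) -> float:
    return tn / (tn + fp)        # share of healthy cases correctly cleared

# Illustrative counts: 100 patients with cancer, 100 without.
tp, fn = 82, 18
tn, fp = 90, 10
print(f"sensitivity={sensitivity(tp, fn):.0%}, specificity={specificity(tn, fp):.0%}")
```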

In the future, AI could assist radiologists and doctors in all stages of the care process, everything from diagnosis and risk calculation to clinical decision-support, by acting as a second pair of trained eyes and an opinion giver. Doctors would therefore be empowered to see more patients and provide faster and more accurate results. The aim should be for more people to be seen sooner and more often!

Additionally, research should be conducted to discover whether some patients would prefer automated machine testing for breast, cervical, prostate and testicular cancer due to the personal nature of the examination, but this brings into question the ethics of machine-led/machine-only diagnostics.

Ethics of fully automated diagnostics and treatment

Currently, due to legal and ethical reasons, AI cannot fully automate diagnostics or treatment, but can only be used to augment doctors’ and radiologists’ work. Some of the main issues with AI-first decision-making processes are:

  1. Where does liability lie if a “mistake” is made – is it with the doctor, manufacturer, data scientists, or with the hospital who made the purchasing decision?
  2. If a mistake was unfortunately made, how would the patient prove that there was negligence, bias or error? It might be expensive and complicated for them to prove the algorithm’s effectiveness or bias.
  3. Who would insure against a mistake?

The ethics and insurability of AI are still an evolving area in which governments should take an active lead. The European Union and USA have started publishing initial guidelines. In 2018, the EU released guidelines for the lawful, ethical and robust development of AI.

Geographic distribution of issuers of ethical AI guidelines by number of documents released. Anna Jobin, Marcello Ienca, Effy Vayena: https://arxiv.org/ftp/arxiv/papers/1906/1906.11668.pdf

Additionally, academics have been researching the risks, drivers and potential solutions:

Summary of drivers, risks, solutions and desired outcomes for using AI for breast cancer care. https://doi.org/10.1016/j.breast.2019.10.001

Future areas of improvement

Clinical data highlights two areas that specifically need research and improvement: the discrepancy in survival rates between different races/ethnicities, and awareness of breast cancer in men.

Differences in race and ethnicity survival rates

In its latest report, the American Cancer Society reported a discrepancy between the survival rates of African American patients and those of white and Asian/Pacific patients. The hypothesis is that this is partly due to later detection and socio-economic factors. This is an area where AI could assist by making access to treatment more affordable and ubiquitous.

Trends in Female Breast Cancer Death Rates by race/ethnicity
Data from: www.cancer.org

Male breast cancer

Breast cancer in men accounts for around 1% of US patients. Unfortunately, there has been a 20% increase since 1975, which can be partially linked to increased obesity. Additionally, men are more likely to be diagnosed with advanced breast cancer. The American Cancer Society attributes this mostly to a lack of awareness. Potentially, a combination of increased education and automated testing could lead to earlier detection. Introducing automated testing at medical facilities could reduce the embarrassment for men of asking for their breasts to be checked for signs of cancer.

Breast Cancer Awareness Month

Sadly, not all cancers are detected early enough to benefit from the advancement in treatments. I would like to therefore bring attention to Breast Cancer Awareness Month. Please take a look at https://www.wearitpink.org/ to find out more about fundraising opportunities and how they are working to promote breast cancer awareness.


Color E-Ink reader products are finally coming to market!

I’m sure I’m not the only person who has been hoping for a color electronic notepad with a real paper feel. Microsoft OneNote is great for taking notes, but the problem is that if you need to do something quickly, or just jot down some ideas, booting up your laptop is a pain. Whilst many people use the Apple/Wacom/Samsung stylus on tablets, I find drawing on glass and the lack of tactile response a put-off.

I’ve been drawn to devices like the reMarkable 2, which promises to “replace your notebooks and printed documents with the only tablet that feels like paper”, but without color I am unable to properly express my ideas or annotate meeting notes any better than just typing them into OneNote.

Luckily there are two new technologies that could potentially help us find the ultimate digital true paper feel, with tactile response, low eye strain and color.

They are E INK’s color Kaleido and TCL’s NXTPaper.

  • E INK’s Kaleido – improves on existing e-ink technology (for example, what is used in Kindles) by adding 4096 colors. The problem is it has similar response times to existing e-ink readers, and the colors won’t wow you like a glossy magazine.
  • TCL’s NXTPaper – is built on LCD technology but relies on reflection for lighting. It has a faster response time and greater resolution than E INK, and also uses up to 65% less energy than light-emitting LCDs. The problem is the technology is pretty new and not many people have spent much time with it.

Some products using these technologies have already come to market in China; I can’t wait for them to be more widely released and ready for use! I reached out to reMarkable, but their support team did not know when they might be releasing a color version 🙁


Will air taxis and autonomous vehicles have their big break in 2021, or are they stuck in a catch-22?

There are hopes that Volocopter could debut a flying taxi in Singapore in 2021. The German company has reportedly been in talks with the Singapore Economic Development Board, Transport Ministry and the Civil Aviation Authority to bring air taxis to the city state.

Singapore has often been thought of as an ideal place to experiment with novel transportation systems, as the city is compact and has an innovation-friendly environment, which could make the rollout of experimental technologies simpler, more likely and less costly.

Whilst the chances of the project going live are unknown (I’m thinking they are very slight, but hopefully I will be proved wrong), it is interesting to see this type of innovation taking place. If a safe, reliable, affordable, and autonomous system could be widely released, it could revolutionize transportation in highly built-up areas like New York, London, Dubai, Singapore and Hong Kong.

Some places, like India, which banned autonomous vehicles in 2019, will continue to be resistant to autonomous vehicles (for many valid reasons). What both autonomous vehicles and air taxis need is a “big break”: once one city or state has proven that they are safe and can break even, I think there will be plenty who will rush to follow – that’s the catch-22.


Proof of Stake vs Proof of Work – What will this mean to Nvidia?

Next year Ethereum will be moving to a new algorithmic operating model (Casper FFG, or Ethereum 2.0). The major update will be a move from Proof of Work (PoW) to Proof of Stake (PoS). With PoW, miners run complex GPU tasks to mine validation blocks. The Casper algorithm removes mining and moves to verification and validation of new blocks of transactions by block validators, which are selected according to their stake.

The voting power of each validator will be determined by the amount of ETH they put up for stake. For example, someone who has deposited 64 ETH will have double the voting weight of a validator who deposits the 32 ETH minimum.

The move towards Proof of Stake is seen as an advantage by many, as it is more environmentally friendly as well as more secure. Any validators that act maliciously will be penalized and removed from the network.
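A toy illustration of the stake-weighting and slashing ideas (this is not the actual Casper protocol, just the proportionality principle described above, with made-up validator names):

```python
# Toy stake-weighted validator selection: the chance of being picked to
# validate a block is proportional to the ETH staked, and misbehaving
# validators lose part of their stake.
import random

stakes = {"validator_a": 32, "validator_b": 64, "validator_c": 96}

def pick_validator(stakes: dict) -> str:
    names, weights = zip(*stakes.items())
    return random.choices(names, weights=weights, k=1)[0]

def slash(stakes: dict, validator: str, fraction: float = 0.5) -> None:
    """Penalize a misbehaving validator by burning part of its stake."""
    stakes[validator] *= (1 - fraction)

picks = [pick_validator(stakes) for _ in range(10_000)]
print({v: picks.count(v) for v in stakes})   # roughly a 1:2:3 ratio

slash(stakes, "validator_b")                 # a malicious validator loses half its stake
print(stakes)
```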

It has been reported that cryptocurrency miners bought 3 million GPUs in 2017! That’s a lot of GPUs that won’t be needed any more. As Nvidia is the largest GPU producer in the world, it will be interesting to see how this change affects the company.

Watch the upcoming Battery Day

Don’t forget today is Battery Day. It will be interesting to see what type of improvements Tesla has managed to achieve with its new battery packs. We are on the verge of electric engines achieving parity with ICE (internal combustion engine) vehicles in terms of cost and range.

The big questions I hope will be answered:

Will they have managed to create the million-mile battery? Have they managed to make larger and denser batteries? How have they improved the cooling systems? How will solid-state cells or improved chemistry be part of their future?

You can watch live tonight:

 

Agrivoltaics

Growing crops in the shade of solar panels. Not only does this save land and provide alternative sources of income for farmers, it can also reduce water usage, as plants that grow in the shade take up less water and groundwater evaporates more slowly.

I think this is an interesting concept we should be watching.

Here is a video I found that serves as a useful, quick five-minute introduction and case study.

#resilience #agrivoltaics