Lessons from “GenAI for developers” course by Google

I just finished the Google Cloud Skills path called “Generative AI for developers” and I wanted to write down some of the main lessons and impressions I had.

Overall I think the lessons were well thought out, aimed at an intermediate audience like me, and I learned a lot about the various Google tools (here’s my cert). However, there were also some technology snags along the way that detracted from the experience.

Main topics of the course

The topics in the course are the following:

  1. Introduction to Image Generation – with an Intro video + quiz re: how diffusion models are used for image generation
  2. Attention Mechanism – Intro video + quiz re: how attention mechanisms work, and can be used for a variety of tasks from text summarization to translation
  3. Encoder–Decoder Architecture – Intro video, Jupyter Lab (Python code) walkthrough + quiz re: how the encoder–decoder architecture works and can be used in sequence-to-sequence tasks such as text summarization, Q&A, translation, etc.
  4. Transformer models and BERT – Intro video, Jupyter Lab walkthrough + quiz re: main components of the Transformer architecture and the self-attention mechanism
  5. Create Image Captioning models – Intro video, Jupyter Lab walkthrough + quiz re: how to use deep learning to create image captions
  6. Introduction to Generative AI Studio – Intro video + quiz regarding Vertex AI for customizing and prototyping Generative AI models.
  7. Generative AI explorer – A set of labs using Jupyter Notebooks to explore different GenAI models, and try out prompting tools and various parameters (see below)
  8. Explore and Evaluate Models using Model Garden – Using Vertex AI to try out different Foundation Models, tools and parameters (see below)
  9. Prompt Design using PaLM – How to design good prompts, set restrictions, and interact with PaLM 2, which has detailed reasoning, language, and coding capabilities (see below).

Generative AI explorer – with Vertex AI

The lessons that I liked the most used different examples to teach how to apply the models in different use cases. Everything was run on Google Vertex AI – a unified AI platform that allows you to:

  • Select from 100+ foundation models in the Model Garden
  • Try out code snippets with JupyterLab in the Workbench
  • Access every model with a simple API call
  • Train and deploy models to Production

Text generation examples

You can test any of the text generation models by selecting the model in the top-right dropdown. You specify the context (prompt) for the model, and provide input/output examples if you prefer. Giving no examples is called zero-shot prompting; giving a few examples is called few-shot prompting, which for most use cases is the better alternative.
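To make the distinction concrete, here is a small sketch of how a zero-shot and a few-shot prompt differ. The sentiment-classification task, labels, and wording below are my own illustrative assumptions, not examples from the course:

```python
# Zero-shot: the task alone, with no worked examples.
zero_shot = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died after a week.\nSentiment:"
)

# Few-shot: the same task preceded by a couple of worked examples,
# which usually steers the model toward the desired labels and format.
examples = [
    ("I love how light this laptop is.", "positive"),
    ("The screen cracked on day one.", "negative"),
]

few_shot = "Classify the sentiment of each review as positive or negative.\n\n"
for review, label in examples:
    few_shot += f"Review: {review}\nSentiment: {label}\n\n"
few_shot += "Review: The battery died after a week.\nSentiment:"

print(few_shot)
```

The model then completes the text after the final "Sentiment:", so the examples act as an in-context specification of the output format.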

Pressing the ‘Get code’ button (also top right) gives you the code snippets you need to connect to the selected model in your Google Cloud project.
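For reference, the generated snippet looks roughly like the sketch below, based on the PaLM-era `vertexai` SDK. The project ID, model name, and parameter values are placeholders, and the actual cloud call is gated behind a flag so the prompt assembly can be shown on its own:

```python
# Sketch of the kind of snippet the 'Get code' button produces, using the
# PaLM-era vertexai SDK. Project ID, model name and parameter values are
# placeholders, and the cloud call is gated so the rest runs offline.
params = {
    "temperature": 0.2,        # low temperature -> more factual answers
    "max_output_tokens": 256,  # one token is roughly four characters
    "top_p": 0.8,
    "top_k": 40,
}

def summarize(text: str, use_cloud: bool = False) -> str:
    """Assemble a summarization prompt and optionally send it to the model."""
    prompt = f"Summarize the following text in one sentence:\n{text}"
    if use_cloud:
        # Requires google-cloud-aiplatform and project credentials.
        import vertexai
        from vertexai.language_models import TextGenerationModel
        vertexai.init(project="your-project-id", location="us-central1")
        model = TextGenerationModel.from_pretrained("text-bison")
        return model.predict(prompt, **params).text
    return prompt  # offline: return the assembled prompt for inspection

print(summarize("Vertex AI is a unified AI platform."))
```

With credentials configured, calling `summarize(text, use_cloud=True)` sends the same prompt and parameters to the hosted model.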

Chat model examples

Under the Language models you can customize & test the foundation models, e.g. for customer service or support use cases. Some key parameters include:

  • Temperature: A lower temperature (say 0 to 0.1) is better when factual, true-or-false responses are required. A higher temperature (say 0.6 to 1) will give more imaginative answers, but will also increase the risk of the model hallucinating.
  • Token limit: Determines the maximum length of the text output from one prompt, with one token equal to roughly four characters.
  • Safety Settings: You can define how strict the model responses are regarding potentially harmful content, such as hate speech, sexual content or harassment.
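The temperature parameter is easy to demonstrate outside any cloud service: sampling temperature divides the model’s next-token logits before the softmax, so a low value sharpens the distribution toward the most likely token while a high value flattens it. A self-contained sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, scaled by the given temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.1)  # near-deterministic
hot = softmax_with_temperature(logits, 1.0)   # more varied sampling

print(max(cold), max(hot))
```

At temperature 0.1 almost all probability mass lands on the top token (factual, repeatable answers); at 1.0 the alternatives keep meaningful probability, which is where the more imaginative – and occasionally hallucinated – outputs come from.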

Codey – code generation, chat and completion

You can select between different code engines based on your use case:

  • Codey for Code Generation (6k or 32k token limit) – takes your natural language input and generates the requested code – example below:
  • Codey for Code Chat (6k or 32k token limit) – a chatbot model fine-tuned for helping with code-related questions.
  • Codey for Code Completion – a model fine-tuned to suggest code based on the context and the code already written.

Exploring the Model Garden

In the Model Garden you can try out different Foundation models such as:

  • Gemini Pro – e.g. for text summarization, text generation, entity recognition, sentiment analysis and more.
  • Gemini Pro Vision (Multimodal) – e.g. for visual understanding, classification, summarization and processing visual inputs such as photos, video, documents, infographics etc.
  • Claude 2: a leading LLM from Anthropic – similar to Gemini Pro
  • Llama 2: An open-source LLM released by Facebook / Meta that is fine-tunable to your use case / domain.

There are over 100 models to date; however, the main drawback of note is that the fine-tunable models (except for Llama 2 and Falcon) are mostly limited to classification or vision detection/classification.

Prompt Design

Some of the prompting lessons here:

  • Be concise
  • Be specific and well-defined
  • Ask for one task at a time 
  • Improve response quality by including examples (few shot)
  • Use a DARE prompt – a Determine Appropriate Response (DARE) prompt uses the LLM itself to decide whether it should answer a question, based on what its mission is. You send the regular prompt / context first, followed by the DARE prompt to verify that the generated output matches the mission.
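As a rough illustration of the DARE pattern – the mission text and refusal wording here are my own assumptions, and this simplified variant checks the mission up front rather than verifying the output afterwards:

```python
# Made-up mission statement; the DARE idea is to let the model itself
# check a question against this mission before (or after) answering.
mission = (
    "You are a customer-support assistant for a bicycle shop. "
    "You only answer questions about bicycles, orders and repairs."
)

def build_dare_prompt(user_question: str) -> str:
    """Wrap a user question with a DARE check against the mission."""
    return (
        f"{mission}\n\n"
        "Before answering, decide whether the question below falls within "
        "your mission. If it does not, reply exactly: "
        "'I cannot help with that.'\n\n"
        f"Question: {user_question}\nAnswer:"
    )

print(build_dare_prompt("What medication should I take for a headache?"))
```

An off-mission question like the one above should trigger the refusal text instead of a medical answer, which is the whole point of the pattern.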

Most of the example Jupyter Notebooks in the course can be found on Github here.

Conclusion

Overall I think the course teaches the material well – I liked the hands-on Jupyter Notebooks better than the video + quiz sections, as it’s easier to learn with concrete examples.

What impresses me most is the completeness of the Google Vertex AI platform, and I feel that I now have a good basis to use the platform independently.

CGM Experiment (part 3) – cold therapy post

I’d wanted to learn about the benefits of cold therapy – the physiological and psychological effects of getting into cold water, especially as it relates to health benefits, insulin sensitivity, etc.

As an intro to this topic – this post covers Professor Andrew Huberman interviewing Dr. Susanna Soberg, an expert in deliberate cold and heat exposure protocols, talking about the science and impact of deliberate cold exposure from this study (Cell reports). The interview also covers the importance of the cold shock response and how to approach a deliberate cold exposure protocol.

FYI – this post has been ‘co-written’ using tools such as:

The study

The study was done in Denmark on a male cohort, and carried out by having participants do winter swimming a minimum of two days per week, measuring brown fat activation with an infrared camera, and taking fat biopsies. The study was done in a field setting, and participants were encouraged to do the winter swimming whenever they had time. The relevant ‘minimum viable dose’ was 11 minutes of weekly cold exposure.

Why cold therapy?

Cold exposure can improve insulin sensitivity, which can help to prevent type 2 diabetes. 

“We did see that the winter swimmers had an increased insulin sensitivity. They produced less insulin on all the experimental days. We measured insulin when they were fasting, meaning that they hadn’t eaten in eight hours before the study day. We could see that the winter swimmers had lower production of insulin. Also when they had glucose drinks, the winter swimmers had a faster glucose clearance in the bloodstream. So after two hours, we could see that they had a lower level and the curve went down faster than in the control group.”

Cold exposure can increase brown fat, which is a type of fat that helps to burn calories and generate heat.

“What happens is that you get adapted a little bit every time you go, like exercise, you get a little bit stronger. So every time you go into the cold water you will feel more comfortable in the cold. You are building your adaptation, which happens on a metabolic level, which is happening via activation of your brown fat.

The mitochondria in the brown fat cells are gonna be activated, you’ll have more of those and they will be more efficient at heating you up because the body expects you to do this again. The capillaries in your skin will also become better at constricting. So you will have a better shield of your body to prepare you for the next time.”

“Also your stress response will subside a bit, so you will have less of an increase in your catecholamines with time. With time, because of this activation of your brown fat and your muscles, you will have an increase in your metabolism, which will then make your insulin sensitivity better.”

Cold exposure can reduce inflammation, which can improve overall health, mood and cognitive function.

  • The winter swimmers had lower levels of cortisol at night time – which is beneficial for sleep quality. 

And I think it’s very important to think about cold exposure and heat exposure as something that lowers inflammation in the body. And if we can do that, we will have an open door for preventing lifestyle diseases, right? So for type 2 diabetes, but actually also for some mental diseases as well – such as depression and anxiety, and also Alzheimer’s disease. Newer research shows that inflammation increases the risk of depression, anxiety, and neurological diseases like Alzheimer’s. So if we can decrease inflammation in the body, we will decrease our modern lifestyle diseases, but also these increasing mental diseases that we see in these modern lifestyle times.

It’s just exposure to temperature, actually just a cold or to heat that is gonna trick our body into a natural state again and reset it where the homeostasis, the balance is lost a bit. So the body is gonna repair itself in that way. And I think it’s beautiful that we can do that just by changing the temperature of our body. 

Conclusion

Cold exposure is a safe and effective way to improve health. If you are considering trying cold exposure, be sure to talk to your doctor first and follow any safety tips. 

I will incorporate cold baths into my health routines:

  • Tuesday, after cardio session – 4 minutes total time (2 x 2 min dip)
  • Thursday, after cardio session – 4 minutes total time (2 x 2 min dip)
  • Saturday, after cardio session – 4 minutes total time (2 x 2 min dip)

Thanks for reading

CGM experiment continues – Part 2

I wrote about my first two weeks with the CGM here, and here is a short update with new data. The overview of the data is below, with the new dates marked in yellow:

I wanted to try some things to reduce the average fasting glucose (FG), so here’s what I did.

Experiment 1 – keto diet

On Saturday May 5th I started a ketogenic diet, and as I’ve already been doing intermittent fasting and generally don’t eat a whole lot of carbs, I figured this would be easy to try. Here are the results for three days:

Ok, that didn’t go as planned at all 🙁 The massive spike on Sat May 6th is from having one tiny slice of cake… 

Apparently there is something called ‘adaptive glucose sparing’ which essentially means that since your muscle cells prefer fat when you are doing a low carb/keto diet, there will be more glucose floating around in your bloodstream. 

I lost two pounds over the 3-4 days this lasted, and I generally feel leaner. The FG was very stable, but I don’t want to have FG that high in general. There are ‘extenuating circumstances’ (e.g. insulin could still be low), but I would’ve needed blood measurements of fasting glucose and fasting insulin to verify this, so I had to try something else.

Experiment 2 – regular, smaller meals

After this I thought, “hey, maybe I’ll just introduce smaller amounts of carbs, eat some breakfast, and have smaller portions”. This would allow my cells to adjust to having carbs again, and with the smaller portions I wouldn’t get as large spikes. Here are the next four days:

Ouch, not good either. And I wasn’t feeling too great this time – I felt sluggish and a bit more tired than usual.

Experiment 3 – back to intermittent fasting

So after this I thought I needed to get back at least to where I was before. Here are the next four days:

It’s really cool to see that after ONLY one day back on intermittent fasting, my FG normalizes again. I honestly don’t know any other health-related measurement where one can see results this quickly!

Again, I’m not worried about the higher readings around 16-17 as that is when I train, and it’s interesting to note that movie night + salty popcorn does raise FG overnight – as can be seen on 05/15.

Conclusion

Overall I’m learning a lot about how my body processes glucose/carbs and how helpful intermittent fasting really is for me – and it does make me want to try out some longer fasts (e.g. OMAD – one meal a day). Stay tuned.

Generative Agents: Five Bold Examples of AI Revolutionizing Product Development

Image created with Canva, no generative agent used here :-)

(first published on Linkedin)

As we see ChatGPT become more widely used, companies of all sizes must ask themselves how to adapt their products and their competitive strategies in this new world.

To recap – ChatGPT by OpenAI is a generative agent that is designed specifically for generating text by predicting what comes next in a given sequence. As a generative agent, ChatGPT can create new content, write code, carry out conversations, and even provide assistance in various tasks, depending on the context and the data it has been trained on.

Generative agents are poised to redefine product development, offering unmatched creativity, efficiency, and innovation. Here are five compelling examples of how these AI-powered systems are transforming the way we create and consume products:

Personalized Products: AI-Driven Sneaker Revolution

1. Generative agents will enable brands like Nike or Adidas to analyze user preferences and create customized sneaker designs tailored to individual tastes. These one-of-a-kind shoes will foster deep connections between consumers and brands.

Rapid Prototyping: Hyper-Iterative Rocket Design

2. Companies like SpaceX can leverage generative agents to rapidly generate multiple rocket designs, streamlining the prototyping process and pulling our sci-fi dreams closer to the present.

Sustainable Design: Eco-Friendly Furniture Evolution

3. Generative agents can help IKEA analyze material data and environmental impact, creating innovative designs that minimize waste and promote sustainability. These eco-friendly products will resonate with environmentally conscious consumers, bolstering IKEA’s brand reputation.

Democratization of Design: Small Business AI Explosion

4. As AI systems become more accessible, Etsy’s small business owners will harness the power of generative agents to create professional, high-quality products. This democratization will unleash a wave of innovation and competition, transforming the online marketplace.

Metaverse Product Sales: The Ultimate Autonomous Agent Experience

5. Generative agents will bring the metaverse to life, creating autonomous agents that interact believably with users for product sales. Imagine the next generation of virtual real estate, where AI-driven real estate agents engage with potential buyers, personalizing the experience and providing valuable feedback to sellers.

Generative agents are set to transform the product development landscape with their AI-powered capabilities. Do you agree or disagree? Please let me know in the comments.

Blockchain – promises and pitfalls

Hi there,

With the recent hullabaloo regarding Bitcoin, Ethereum, etc., I thought I’d spend some time familiarizing myself with these concepts and cryptocurrencies.

As someone who has owned a bit of gold for a long time (seen ups and downs) – due to my inherent distrust in governments being able to control their spending – some of the features of cryptocurrencies appeal to me.

The benefits:

  • Cryptocurrencies (e.g. Bitcoin) can be designed to have a cap / limited amount ever to be issued. This means that they should actually be scarce – you know, like the resources on this planet – and could therefore be good ‘stores of value’ – basically as a currency should be.
  • As a medium of exchange – with the recent SegWit/Bitcoin Cash fork, Bitcoin has a chance to become a good medium of exchange – both for micropayments (e.g. via the Lightning Network) and currency transfers.
  • Programmable blockchains like Ethereum allow smart contracts to be implemented on the chain. The ‘Ethereum computer’ is actually a decentralized network of computers that is ‘Turing complete’ – meaning it’s pretty much up to you how complex the code you write on it is.
  • Due to this programmable nature, especially of Ethereum, I think we will see many use cases tried – some will fail, some will succeed. If you are interested in how blockchain could be applied, for example in HR BPO, please contact me here.

The pitfalls:

The main pitfall, I would say, is still that you have to do your own homework on whom / what to trust – but I guess that applies in life generally… The other pitfalls include the lack of a truly easy-to-use user interface / wallet. I will investigate those in more detail next.

So yesterday I went to a meetup event called ‘Bitcoin and Cryptocurrency Intro – How To Make Money Passively From Home’. The event was hosted in a local Panera Bread, with about 15 people attending. The pitch was for a ‘company’ called BitConnect, where the premise is:

  1. You buy their coin (bitconnect coin or BCC) using Bitcoin(BTC).
  2. The BCC is converted back to USD – and using their HFT (high-frequency trading) algorithms they trade USD vs BTC.
  3. Somehow the BitConnect team are supposedly able to make daily profits (‘interest’) according to the presenter / this chart (no down days):
  4. They state that the high market cap (around 786M USD today, August 18th 2017) is proof of the legitimacy of this platform.

OK, call me a sceptic, here’s why:

  • When I asked about the team behind this, the presenter dodged the question, and no information is available that I can find on the bitconnect.co website. No developers who want to have their names publicly associated with it? Hmm. No early investors? Hmm.
  • From the site “Build trust and reputation in bitcoin and cryptocurrency ecosystem with Open-source platform”. There is no information about what is open source, and this code on Github has one contributor..
  • As Steemit writes – it’s too good to be true, no down days, guaranteed returns, referral schemes etc..
  • You are supposed to send BTC to them – but you are ‘locked in’ for up to 299 days. Guess what will happen to your BTC if they run out of new patsies to pay the old ones..

Bottom line – there is a ton of value that can be created on blockchains – but people, please do your homework. And if you / someone you know is considering BitConnect – caveat emptor…

Cheers,

Oskar

Launch festival April 2017

Hi ya,

A few months ago I got a founder ticket to the Launch Festival in San Francisco for April 7th and 8th. After three whirlwind days in the Bay Area, here are the things that stuck out to me. Personally, for me / Move Correctly, the best meeting was actually with Greg, the CEO of Fit3d, since he provided so much actionable, real advice. It’s just fantastic to exchange ideas with someone in the same industry. Now here are my notes about Launch:

 

Machine Learning

1. The flywheel for Machine Learning – as articulated by Rob May, the CEO of Talla: ML becomes better the more data there is – more data makes better algorithms, which make a better product, which drives more use and more data… So it’s really about who can build the best / quickest flywheel for your use case / industry, and in that sense (or at least the ML companies want to give that impression) this is the time to make major investments in this area, as the first successful movers will have a big advantage.

2. Zorroa – doing really impressive visual recognition from videos – they can search inside videos / images just as if you were googling documents. So e.g. locating any scenes where ‘the Rock’ appears, then narrowing it down to where it happens in a bank, and further to where there is a Lambo in the scene… Currently you need to plug into their REST API, but they mentioned a SaaS app coming in a few months…

3. Corto – their demo of a chatbot analytics interface to pharmaceutical genomics data was impressive, with hypergraphs / nodes flowing and a lot of complex words in the presentation, so I have no doubt the tech is solid. Their team is chock-full of smart people, with e.g. one of the leading AGI researchers – Ben Goertzel – as Chief Scientist. What wasn’t clear to me is who will sell their product, and what their value prop is.

4. The PAC framework, by Rob May again, essentially states that any company should evaluate how it wants to use Machine Learning in these categories:

A) P for Predict, eg. in recruitment which candidates will perform best, in sales which products etc.

B) A for Automate, could be easing workflow, say for example NLP (natural language processing) transcribing recruitment interview notes.

C) C for Classify – say classifying best resumes into different buckets quickly

Now apply these questions across your customers, across Product, across Operations, and you should start to identify good opportunities where to apply ML.

5. Talla is a customer service bot for IT or HR that’s been trained to answer IT / HR questions, with a UI in either Slack or Microsoft Teams. Their target market is mid-size companies.

6. Kylie.ai – created a customer service bot integrated with e.g. ticketing systems like Zendesk. They ‘clone’ employee personalities and create a response integrated into existing UIs, e.g. Zendesk, Salesforce, SAP, which the human customer service agent can review, modify, and approve / send.

Cannabis

The Cannabis market is apparently yuuge, as it warranted its own vertical next to healthcare, drones, ML, etc. Interesting companies included:

  1. Leaf – built a small growing unit that looks like a fridge and automates home growing. They sold about 1M of them in advance and are taking orders for 2018…
  2. Alula Hydro, who have created a hydroponic nutrient-delivery system for industrial growers. The 20K industrial growing management system can apparently raise yields from a crappy 1K per pound to 5K-6K per pound.
  3. Baker, who are making a CRM / loyalty / online store SAAS for dispensaries.

 

Miscellaneous notes

  1. How to get 1000 applicants for a job ad – by Tucker Max from Book in a Box
    • Start with the hook – explain the why / the mission of the company, and if some-one doesn’t believe in that clearly they’re not a fit.
    • Sell the role – Talk like you would talk to a friend about the job. Ditch all the standard corporate lingo about ‘mission critical, pro-active go-getter with a nose for global synergies’. 
    • Pleasure and Pain – explain why the company / role is awesome, but clearly also the downsides of the job, no point in trying to sugar-coat stuff.
    • Testimonials / social proof – we use these in any ads (for soda, cars, books etc), why not for job ads?
    • Finally – the actual ‘boring’ details
  2. VR / AR – was actually a bit underwhelming – nothing really stood out. Yes, it’s cool to get a new, different type of camera or get analytics off VR, but… eh.
  3. In hardware – Megabots was really cool, just in terms of lighting up any 6-year old inside all of us – Robots with fun big weapons (WEABOONS!) and picking up cars 😉

 

Learning Python programming

This is a short post about my experience in learning programming, and Python in particular – how it’s been going, what’s worked and what I’m struggling with. I’ve spent about two hours per day, five days a week, for about 1-2 months now.

As background, I’ve worked in IT (consulting, project management, ERP systems, specifically HR & Payroll) for a long time. When I was younger I never really had an inclination to learn programming – I took one Java course in college, but didn’t like it as the instructor wasn’t very good.

Here are some lessons / nuggets that I’ve found helpful:

  1. Knowing why I want to learn is motivating for me. I want to be able to hack together simpler solutions myself, so that I can become a better entrepreneur / consultant. Ever since my experiences with Move Correctly, I’ve found it frustrating to have to wait for developers to complete work. Waiting can be especially taxing if you choose a fixed-price project, which by default leaves you less leverage on the project schedule / completion date… (separate topic…)
  2. I did the Python course on Codecademy – however I felt that many times there was not enough instruction (no videos), so I would bang my head against the wall for an hour or two trying to get some simple function to work.
  3. The DataCamp Intro to Python for Data Science was quite fun – they award you XP based on successful answers/code – however it wasn’t very challenging, and I don’t think I’d learn to write actual code with their approach. They do have other, e.g. intermediate, courses, but paying 29 USD per month – compared to Udemy’s pricing – doesn’t add up.
  4. In December I started taking the “Python for Data Science and Machine Learning Bootcamp” and I’m about 40% through it – mostly the data science parts. After going through the crash course I’ve learned about Jupyter notebooks and Python libraries such as NumPy, Pandas, Matplotlib, and Seaborn. I like statistics, and it’s cool to be able to extract meaning from masses of data for sure. However, I have a feeling that this post is correct – ‘Data preparation accounts for about 80% of the work of data scientists’ – and I’m not sure that’s for me… Well, this course has the Machine Learning portions coming up, so let’s see. Overall great value, as I picked up the course for 15 USD.
  5. I’m also taking the “Python Mega Course”, which has been really great – I can highly recommend it, and it’s great value at 10 USD (year-end sale). The best portions so far have been:
    1. Learning to write a Windows GUI program (using the Tkinter library), with a connection to a SQL database (SQLite or PostgreSQL).
    2. Learning to write a Python Flask web app, setting up a Git/GitHub profile, and deploying the app to Heroku.
    3. Learning the Bokeh library for data visualization on the web – example here. It takes a csv file with volcano locations (latitude / longitude) and uses the Folium library for viewing in the browser. If you are interested, the code looks like this.
  6. Overall I feel I’m now at the stage where I want to start building applications that I’d be interested in seeing myself – targeting the start of February. I can write simple code for myself, but need frequent references to libraries / Google / Stack Overflow, etc.
  7. I will likely also retain a “trainer / programmer” via freelancer.com to help me with the upcoming challenges – a bit like the Thinkful part-time Python bootcamp, but hopefully cheaper :-). I’m planning to try out say 10 sessions / lessons with another Python programmer to review any questions / issues I’ll have, as well as concepts / tricks, etc.
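As an aside on point 5.3, the volcano-map exercise essentially boils down to parsing a csv of coordinates before handing them to the mapping library. Below is a stdlib-only sketch of that step – the column names and sample rows are my own assumptions, not the course’s actual dataset:

```python
import csv
import io

# Stand-in for the course's volcano csv; a real file would be read
# with open("volcanoes.csv") instead of this inline sample.
raw = """Name,Latitude,Longitude
Mount St. Helens,46.20,-122.18
Mount Rainier,46.85,-121.76
"""

def load_markers(csv_text: str) -> list[dict]:
    """Parse csv rows into marker dicts ready for a mapping library."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {"name": row["Name"],
         "location": (float(row["Latitude"]), float(row["Longitude"]))}
        for row in reader
    ]

markers = load_markers(raw)
print(markers[0])
```

Each resulting dict maps directly onto something like `folium.Marker(location=..., popup=name)` in the course’s version.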

Here are some development projects I’m considering to try out:

  • Simple game using either Kivy or PyGame frameworks
  • A Python Django website with user login using social IDs, PayPal/Stripe integration, user data entry, etc.
  • A digital-health data visualization, perhaps with API pulls.

Look for an update on these within one month… For now, if there’s anything you would like me to specifically work on, please drop me a line here.