There is a problem with GPT-3: it cannot go viral

goose with a golden egg

GPT-3, the highly advanced language model developed by OpenAI, has been making waves in the tech industry for its ability to generate human-like text, with ChatGPT being its best-known application. However, despite its impressive capabilities, there is a problem with GPT-3 that has been hindering its widespread use: the cost. It is rumoured that OpenAI currently burns about 3 million dollars a day. That is not sustainable, even for OpenAI, so they are now selling part of the company to Microsoft for 10 billion dollars, and a lot of that money will probably be in credits for the Azure cloud, where OpenAI is currently running.

You could create a few apps that might be useful or even funny, speeding up the adoption of this tech and building the next wave of companies. Except you can’t, and here is why.

The price for using the Davinci model is $0.02 per 1,000 tokens. While this might not seem like a lot at first glance, one of my experimental apps, which reads the main articles from the current edition of cnn.com/markets, summarises them, and rates the sentiment, uses around 20,000 tokens per run. That is $0.40 for running it once, and it doesn’t even do much: it reads about 7–8 articles. Now picture compiling the whole current news landscape. If you build an app that reads 100 articles, you would burn close to $6 every time you asked it to read the news. Now say you want to build a business out of this, you manage to make it go a bit viral, and you get 10,000 users: that would be around $60,000 every time those users ran the app once. Run it multiple times a day, since the news constantly changes, and you can easily see why such a system is impossible to scale, or to build any kind of business on for the general population.
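Just to make the maths explicit, here is a tiny back-of-the-envelope sketch, using only the figures above (the constant names and the 7-articles-per-run assumption are mine):

import Foundation

// Back-of-the-envelope estimate using the figures from the post:
// $0.02 per 1,000 Davinci tokens, and ~20,000 tokens for roughly 7 articles.
let pricePer1kTokens = 0.02
let tokensPerArticle = 20_000.0 / 7.0                // ~2,860 tokens per summarised article

func costPerRun(articles: Int) -> Double {
    Double(articles) * tokensPerArticle / 1_000.0 * pricePer1kTokens
}

let hundredArticleRun = costPerRun(articles: 100)    // ~5.7 USD for a single run
let tenThousandUsers = hundredArticleRun * 10_000    // ~57,000 USD if 10,000 users each run it once
print(String(format: "100 articles: %.2f USD per run; 10,000 users: %.0f USD", hundredArticleRun, tenThousandUsers))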

Now ask yourself this:

What is the cost, for you as a content creator, of a YouTube video going viral and getting 1 million views?

And what is the cost, for you as an app developer, of an app in the App Store being downloaded and used by 1 million people?

Maybe this is why there isn’t a code red at Google after all. Unless OpenAI finds a way to reduce the cost by a factor of 1,000.

GPT-3 in its current form is big; it needs massive infrastructure, and that infrastructure comes with a big bill.

Don’t get me wrong, GPT-3 is cool, like really cool, but it has a virality problem: you can’t build apps on top of it that can go viral, at least not yet. I really hope this will change soon.

Do note that all tech is like this in the beginning, and I do expect something with similar capabilities to be able to run locally sometime in the next 5 years, probably with some new algorithm that is far more efficient. Our brain does far more complex computation and runs in a very small enclosure, so I do believe the laws of nature will allow it to happen.


GPT-3 use case – summarise current news feed from website and classify the sentiment – open-sourced

Source code: https://github.com/cosmindolha/Scrapper

I have had this idea in my head for a few years now. It would be cool if you could just open an app and have it go fetch the latest news, summarise it for you, and classify the sentiment. That way you could, in theory, cover a lot of ground much faster than by browsing through the news sites on your own. You could also add filters that alert you to important news, good or bad, on a specific topic, giving you the kind of fast edge that is usually available only to bigger financial companies.
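The actual code is in the repo linked above; what follows is only a rough Swift sketch of the kind of completion request such an app ends up making, assuming the text-davinci-003 model, a placeholder API key, and prompt wording of my own:

import Foundation

// Rough sketch: ask the GPT-3 completions endpoint to summarise an article
// and rate its sentiment. The API key and prompt wording are placeholders.
let apiKey = "YOUR_OPENAI_API_KEY"

func summariseAndRate(article: String) async throws -> String {
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")

    let body: [String: Any] = [
        "model": "text-davinci-003",
        "prompt": "Summarise the following article in 3 sentences, then rate its sentiment as positive, negative or neutral:\n\n\(article)",
        "max_tokens": 256,
        "temperature": 0.3
    ]
    request.httpBody = try JSONSerialization.data(withJSONObject: body)

    let (data, _) = try await URLSession.shared.data(for: request)
    // Pull the first completion text out of the JSON response.
    let json = try JSONSerialization.jsonObject(with: data) as? [String: Any]
    let choices = json?["choices"] as? [[String: Any]]
    return choices?.first?["text"] as? String ?? ""
}

Every article pushed through a prompt like this counts against the token bill, which is exactly where the costs discussed above come from.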

There is a catch: in its current form, GPT-3 is way too expensive for this type of app to be feasible (except for larger institutions), since it burns through tens of thousands of tokens really fast.

For now, this will stay at the proof-of-concept stage.

My DM’s are open 😉 https://twitter.com/CosminDolha

If you are having problems paying for your Apple Developer membership in 2022, try this

I have been trying to renew my Apple Developer membership for a couple of days now, and talking with Apple support provided very little info on what might be going on. The card I use is the card I have used before, and it is being used for other Apple services and purchases, so it works with Apple. Still, no luck paying for the Apple Developer Program. Apple support suggested that I try another card. Well, it would be weird to open a new bank account and get different cards just to debug their system, which has always worked well with my bank, Raiffeisen, and I am not very keen on switching banks. So what can be done? I could go and talk to people at my bank, but I decided to leave that for a later step and give Revolut a try instead. I quickly opened an account with my phone, used a virtual disposable card, and, surprise, Apple accepted the payment, so that was the solution. Hopefully this will help other Apple developers around the world purchase an Apple Developer account.

Practical SwiftUI stuff I learned while building a Photo Viewer for macOS

Photo Viewer for macOS made using SwiftUI

If you just want to tinker with all the code, you can get it here.

How to make your SwiftUI macOS app open a file

You can open a file with your app from Finder’s “Open With” contextual menu (Control + Click or right-click the file), or you can change the file association to make your app the default app for that file type.

Since my macOS app is used to open images, the code below handles image types:

VStack {
    // .....
}
.onOpenURL { url in
    // Load the image data from the file the system handed us.
    if let imageData = try? Data(contentsOf: url) {
        DispatchQueue.main.async {
            image = NSImage(data: imageData)
        }
    }
}

Don’t forget to remove the App Sandbox in Xcode so you can access the files in the operating system. Unfortunately this will make your app very unlikely to be accepted in the App Store, but you can still distribute it on your own.

How to make your SwiftUI macOS app open a file in the same app window

By default, your macOS app will open a new window every time you open a new file with your program. You will rarely want this (I am not exactly sure why it is the default behaviour), but here is how to “fix” it:

WindowGroup {
    ContentView()
        .preferredColorScheme(.dark)
        // Ask SwiftUI to reuse this window for incoming open-file events.
        .handlesExternalEvents(preferring: Set(arrayLiteral: "pause"), allowing: Set(arrayLiteral: "*"))
        .frame(minWidth: 300, minHeight: 300)
}
.windowStyle(.hiddenTitleBar)
.commands {
    // Remove File > New so the app cannot spawn extra windows.
    CommandGroup(replacing: .newItem, addition: { })
}
.handlesExternalEvents(matching: Set(arrayLiteral: "*"))

Download the FastImage for macOS source code here.

Xcode Testing ML model for your app without running on the device

As of this writing, testing an image detection ML model inside the simulator (I am using Xcode 14.0 beta 3) won’t work if you are targeting iOS or iPadOS; you have to use an actual device. But I stumbled onto the fact that if you make your app run for the “My Mac (Designed for iPad)” target, you can actually test the ML model without installing the app on an actual device. So next time you need to work on integrating an ML model in your app, you can use the “My Mac (Designed for iPad)” target and move through development without a problem. I do hope that in future versions you will be able to run ML models inside the simulator.

Extension for inverting the Color of your image in SwiftUI based on .dark or .light theme

I am using this little extension to invert the colors of the icons in my app. The icons are black and white and were designed first for the .light theme, which means they are drawn with black lines; when the user switches to dark mode, the lines should turn white.

struct DetectThemeChange: ViewModifier {
    @Environment(\.colorScheme) var colorScheme

    // @ViewBuilder lets the two branches return different view types.
    @ViewBuilder
    func body(content: Content) -> some View {
        if colorScheme == .dark {
            // Dark theme: flip the black line art to white.
            content.colorInvert()
        } else {
            content
        }
    }
}

extension View {
    func invertOnDarkTheme() -> some View {
        modifier(DetectThemeChange())
    }
}

//usage example

Image("iconName").resizable().scaledToFit().frame(height: 40).invertOnDarkTheme()

That’s it. Now let’s get back to WWDC 2022, which starts in a couple of hours.

Listening this week

Sadly, we have lost one of my favourite composers.
In the words of Yanis Varoufakis, “We owe you melodies-soundscapes without which life would be much, much poorer”. For me, and I am sure for others too, Vangelis was a dream enabler.

Error – The VNCoreMLTransform request failed

If you are working with the Vision framework and you get this error while testing your iOS app in the simulator, try it on an actual device and see if the error goes away. A very unhelpful error description.

This was very confusing for me, since the same code was working fine in a macOS app. I also did not find any warning anywhere that you are supposed to test it only on an actual device. Not sure how this makes any sense, but I am glad I figured it out; it drove me crazy for the last two days.
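For context, this is roughly the kind of Vision setup where the error surfaced for me; a minimal sketch, with MyClassifier standing in for whatever generated Core ML model class you are loading:

import Vision
import CoreML
import CoreGraphics

// Minimal sketch of a Vision + Core ML classification request.
// "MyClassifier" is a placeholder for your generated Core ML model class.
func classify(cgImage: CGImage) throws {
    let coreMLModel = try MyClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, error in
        // In the simulator, this handler is where the VNCoreMLTransform error
        // showed up for me; on a real device the same code worked fine.
        if let error = error {
            print("Vision error: \(error)")
            return
        }
        let results = request.results as? [VNClassificationObservation] ?? []
        print(results.prefix(3).map { "\($0.identifier): \($0.confidence)" })
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
}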

Today I started coding (using Xcode) on the MacBook Pro 16 2021, here are my first thoughts

Let me start with the main reason I switched from the MacBook Air to the MacBook Pro 16, which is easy to guess: screen size. After coding for a while on the Air, I realised I needed a bit more real estate for running the simulator or watching the changes in the Canvas while working on a SwiftUI component. For a while I used my old iPad Pro as an “external monitor”, but I still found I moved my head too much, breaking my concentration on the current task. I tried using my desktop monitor, but it is a little too big, and again I found I moved my head a bit too much; weirdly enough, I preferred to go back to my 13-inch screen. This is one of the reasons I didn’t go for a desktop iMac: while the bigger screen is nice for editing a photo or doing some drawing, for actual coding I prefer something smaller.

My Air is light, very light: you can pick it up with two fingers and it feels safe in your hand. The MacBook Pro 16 is much heavier; I can’t pick it up with two fingers at all, I have to use my whole hand. But it’s not a problem, and it’s something I can get used to. The reason I sometimes need to quickly grab the laptop with one hand is to protect it from the incoming tornado that is my 2.8-year-old son rushing into my office and jumping straight onto the armchair where I work. He is growing up fast, and with that he is becoming a bit more careful and aware of the tech in the house, so we should be fine.

Is it speedier? Well, yes, you can notice it in Xcode, but not by a huge amount. I do think it will save a bit of time over the course of a year. The quicker the project builds and runs, the better it is for you as a coder; if it takes too long, it can break your flow. The Air still does a really good job with the M1 chip; for the size and the price, the Air is still remarkable.

What else? I haven’t heard the fans yet, and I suspect I won’t with just a simple Xcode project. I do expect to hear them when I work on some ML models; we’ll see. And since the fans never spun up, there was no noticeable heat either.

The screen is really nice and feels like just the right size for what I need: not too small, not too big. Like all Apple screens, it’s comfortable to look at, and while I don’t usually work outside in daylight, I do like that I can bring it with me and work on my projects while the kids have fun outside in grandma’s yard. I suspect that will happen a lot during the summer days.

Moving everything to the new machine was pretty easy. I installed all the stuff I need to code and left the other programs on the Air, like, for example, Minecraft (Education) that I play daily with my oldest son.

The sound is really impressive. While I won’t use it much at home, since I prefer my Bose speakers, the sound coming from the MacBook Pro 16 is not something you would expect from a laptop: really deep bass, and clear and well balanced in the midrange.

I don’t think I will bring this one on trips with me, because it’s heavy. I will probably prefer to bring the Air on trips where I only use the machine for photography, even though the MacBook Pro has an SD card reader and I have to carry extra adapters for the Air.

There is definitely room and use (in my case) for both. Initially the plan was to give the Air to my older son when he starts school in the autumn, but he prefers my older Lenovo Yoga with a touch screen, which has a bigger screen than the Air. The other question might be whether to buy the Max or the Pro, and the answer should come from your intended usage: if you mainly use the CPU and rarely the GPU cores, go for the Pro; if you need the GPUs, go for the Max.

Well, that’s it for now. If you like the screen size, the MacBook Pro 16 2021 is definitely a good machine to code on.
