Studio Gruhl is a strategy and design agency based in Berlin that works with clients like Google, Nike, The North Face and Highsnobiety. We spoke with founder Malte Gruhl and motion designer Joma Frenzel to find out how they’re using AI, and how they feel about it.
You can see all the articles in this series here.
Broadly speaking, are you excited or nervous about how AI will change the design industry?
Malte: We’ve been working with AI since 2019. Back then, we also started training our own models. It didn’t look good and it wasn’t that useful, but it was a fun experience.
AI has made huge leaps since then. But the reality is that it’s still statistics, not true intelligence. So AI is a support, not a tool in itself.
The output can look impressive in a single image or for a specific aesthetic, but achieving consistent results across more complex systems – in terms of storytelling, perspective, or even colour grading – is still far from reliable.
Yes, big corporations are downsizing in the hope that AI can help them cut corners, meaning AI is already having an impact; the question is whether that promise holds.
We should be more concerned about the economic consequences if investors realise these tools aren’t advancing as quickly as promised. The design industry is often among the first to feel the effects of a recession, and to me, the risk of a bubble bursting is greater than the chance of AI becoming a “real” AI anytime soon.
Do you have an agreed policy around AI as a business?
Malte: Yes. At the moment we cannot control how AI is trained, but we can control how and when we use it.
For example, if we use image- or video-generating tools, we do not include the names of artists or photographers in prompts, and we also avoid using reference images we have not created ourselves. We write our prompts from the ground up and try to stay in control as much as possible, without tapping into someone else’s aesthetic.
When did you realise AI was going to have an impact on design?
Joma: When Sora was first released, many assumed it signalled that replacing a full VFX team with AI might only be a few months away.
Since then, it has become clear that while AI can assist with animation work, it is still far from being precise or stable enough to reliably generate something truly new and distinctive.
Google’s Genie3, for example, introduces a real-time, interactive world model that is compelling to experiment with. Yet, it currently feels more like a tool for exploration and prototyping than a replacement for a team of developers.
These advances do expand the possibilities for smaller teams to take on larger-scale projects, which is a promising direction. At the same time, much of the discourse around new AI integrations tends to highlight ambition more than it acknowledges present constraints.
Have you undergone any AI training, either as a studio or individuals?
Joma: Like most people, we’ve taught ourselves everything along the way. The only way to get good at something is by simply starting to do it.
Since AI tools have only recently begun to be optimised for designers, the learning curve feels like an emotional rollercoaster. There is no “proper” way to learn a tool that takes on a million different forms.
AI is not a magic box. It’s fun. It highlights what real value comes from our own thinking versus what is merely generated output for the sake of content creation.
We aim to stay conscious of where things originate, and how we can still have the greatest creative impact with the outcome in our own hands. AI tools should serve to accelerate the process, not replace the creative mind behind it.
How do you use AI in the studio’s creative process?
Malte: It is always exciting to explore new tools, incorporate them into the workflow, and see what happens. AI can speed up the workflow, assist in sketching out ideas, and is a great tool for quick iterations.
It can also create a beautiful brand image or help set a visual mood, but as soon as it gets into detail, it becomes less successful.
It’s hard to control the output – compared to consistency in photography for example – and it is difficult to achieve good results outside of common visual clichés, since the training data simply isn’t there. This often leads to longer retouching times and an overall drop in quality.
Do you think clients care if/how you use AI in your work?
Malte: I think they are interested, but we are pretty transparent about what is possible and what is not. What is also new is that once we have shared a set of assets containing AI content, the appetite for real photography increases, because clients start to see the value of tailored, high-quality content.
Joma: There is definitely a push, as we feel that clients now expect more output, believing that anything can be created instantly.
Yet during the process, they usually come to understand that high-fidelity work only has an impact if the concept is strong and the idea is meaningful. Depending only on AI for instant creation tends to produce results without real depth, often missing the clarity and direction that give work lasting value.
Do you use AI for any non-creative aspects of running your business?
Malte: Spellchecking my emails, using it as a sounding board, or helping to structure loose write-ups. It has also largely replaced my Google habit, although that could also have to do with the fact that Google and search engines as a whole have become pretty useless.
Joma: From a technical perspective, AI can shorten render times by offering upscaling and, in some cases, removing the need for conventional denoising tools.
Beyond that, it can enhance detail, allow rapid testing of animation options, and make it easier to review ideas on a larger scale. The benefit depends largely on approaching it with a conscious and considered mindset.
Beyond the best known tools, what is one AI tool that you would recommend to other design studios?
Malte: Krea for upscaling, although Figma is doing that now as well. MidJourney for visual content and short motion tests, and Black Forest to dabble around with our own data sets.
Joma: Soon there will be more AI denoising tools, which will be exciting to use directly in the viewport of your 3D software. That’s something I’m really looking forward to.