I’ve been a fan of video games for as long as I can remember. But I think I am a bit of a weird gamer. I read a lot about games and I like to try games, but I don’t actually play a lot of games.
When people tell me they logged 200 hours in some game on Steam I’m just wondering how that game kept them interested. I think the only game that I have ever played for more than 200 hours is World of Warcraft.
If I look at my favorite games, there’s only a handful that really gripped me and made me spend a lot of time on them, and that was mostly in my teens when I had plenty of spare time on my hands. I think I still have plenty of spare time; I just have to be a bit more careful about how I spend it than when I was, let’s say, 17.
I have access to a ton of games on Origin but I just don’t care. I bought the last two Deus Ex games hoping to recapture the spark I had with the first Deus Ex, and I quit playing both after two hours with the feeling that I had better things to do.
Last weekend I picked up a Switch and started playing Zelda, and I just couldn’t put it down. It’s been a while since I had that experience.
The past few years my “gaming” fix has mostly been the PS4. I played titles like Drive Club, Dark Souls 3, Uncharted 4, Fallout 4, The Witcher 3 etc.
While I enjoyed these titles I didn’t love them. The Witcher 3 came close but it still had too many flaws. The others were enjoyable but nothing that I felt very strongly about.
Zelda: Breath of the Wild is different.
It draws me towards playing it. The game design is just exquisite. It is a breath of fresh air in a sea of games that are just more of the same.
If you have a chance to spend some time with this game, do it. Play it from the beginning and take your time. It’s really good.
In January I got an HTC Vive and I started exploring VR. I wanted to write a bit about the journey so far.
I came in thinking VR was “interesting” and that I wanted to know more about it.
Over the next few weeks I wrote down some of my findings. You can read a very messy page of my findings somewhere in the research section of this website; I don’t recommend reading this unless you have a very big interest in VR and want to take a deep dive. It’s a collection of notes in multiple languages without a lot of structure.
I plan to rearrange that content to something better, but first I need to know more. I feel that every week my opinion about VR gets influenced by something new and it’s not “complete” yet.
The main reason I got a Vive is because I loved the possibilities from a user interface perspective. Being a UI designer that wants to explore new UI paradigms I felt I needed to explore this.
This proved to be correct – there is a lot of innovation in UI design within VR that is super interesting. Things are evolving at breakneck speed, and when I read about the work that is happening I can only conclude that some very smart people are working on solving problems in VR right now.
If you check r/Vive, it’s almost guaranteed that somebody came up with something new and innovative just this week.
After a few weeks of trying different things and getting friends and family to try it as well I have to say I’ve become quite the believer in how much this might change things in the future.
What I want to stress is that VR is not a “gamer” thing.
While the majority of public VR content is game related, the future of VR is not gaming.
Every domain can benefit from VR.
As a designer I especially like the apps about creation.
Imagine having an infinite amount of Legos to build with.
Imagine being able to paint without having to worry about the paint drying up, or about getting a new frame.
Imagine sculpting a massive dragon statue, then zooming out and replicating it to start working on its twin brother.
The way you work on these creations feels like creating art in the real world. Except without the limitations.
I can think of so many business applications as well.
You can walk into different spaces without physically moving. I was looking for a house last year and I wasted so much time walking into houses where, after one minute, I already knew they weren’t for me. The real estate business is going to have a blast with VR.
You can see the scale of things, you can really imagine how something is going to “be”. You can literally walk through something that doesn’t exist yet.
It’s not like watching a flythrough of a 3D model. There’s a certain physicality to it that feels pretty real.
I think Google Earth in VR is awesome. Google Earth on the desktop is flat out boring.
You can imagine how things might fit into your home… or onto you. There’s an infinite amount of possibilities for fashion, for retail.
I wrote about MOOCs last time. What I didn’t write about is that I see free education as a big equalizer.
Being able to go to Harvard online is awesome. When I read that some kid in Africa won a Google code contest through perseverance and hard work (and access to the internet), that makes me happy.
I was born in Belgium and like most of my friends and colleagues I got a lot of chances that people born somewhere else didn’t get.
My parents got me a PC when I was 10. I had the freedom to learn whatever I wanted in my free time. I didn’t have to work and I had all the time in the world to dig into computers. This enabled me to have the job that I have today.
Not everyone gets the same chances.
What does this have to do with VR? Well, I think VR is the next enabler to have experiences that you otherwise couldn’t have.
The current barrier to entry for VR is quite high. You have to own quite the beefy gaming PC, you need enough room for the full VR experience, etc. The cost for a setup is well over €2000.
But in time this cost will become lower. We will be left with a tiny computer that you can attach to your head, some earbuds to plug in and boom – you are in another universe.
VR will become more accessible, and it will be a great enabler for everyone.
It’s going to change quite a few things. If I were into stocks I would buy into some VR companies. Wait and see if I am right :)
On Twitter, Thomas asked “When and why exactly do you do wireframes?”.
I replied that this really depends on what kind of work you are doing. You will get a different answer from a web designer than from an app designer.
I’ve been enjoying the Working File podcast recently. If you are a designer it’s definitely worth a listen.
In Episode 3, the hosts have a discussion about what it means to be a famous designer.
At one point someone notes that we designers are really good at claiming to be the first to do a certain job while that job has actually existed for ages.
This is so true. People start calling themselves “digital product designer” as if this is a new job that never existed before “apps”. This job was just called functional analyst before.
I have this book called Designing Interactions that shows people doing essentially the same work as my job today, back in 1992.
But back to the question at hand: when and why exactly do you wireframe? A lot of people answered that they never wireframed at all or they would just jump straight to either visual design or HTML and CSS.
This strikes me as something you can only do when you make some kind of “template”-style marketing website. Or if the exact content the website should have is delivered to you. Or if your focus is entirely on the visual side of things.
Are you doing the problem solving or are you doing the visuals? Are you doing the content or are you trying to make sense of what someone else decided before you? All of this is design, but it’s a different kind of design.
At Mono we help design extensive software that contains tons of functionality like Schoolonline and Ticketmatic.
We are hired as problem solvers.
The visual parts of the interface are important and we strive to make beautiful, delightful interfaces, but there is a big emphasis on solving the actual problem.
A lot of the software we make is pretty extensive. If you mapped out all of the unique screens in the kinds of applications we make, you would probably find between 50 and 250 of them.
In a recent project we evaluated Figma as a design tool. The real-time collaboration features are pretty awesome, but the application has some issues when designing at scale: performance gets pretty choppy and it lacks text styles.
This brings a new problem to the table which I like to call “designing at scale”.
How do you create and communicate a design that spans hundreds of screens?
Over the years we’ve tried different solutions to designing at scale.
For web apps we rely (a lot) on HTML prototypes.
For native apps we like image-based prototypes – made with tools like InVision – but syncing everything up can be a real pain. Recently I’ve started using Flinto, which seems pretty cool.
I’ve also had success with well-documented Keynote files that I like to turn into websites with the help of Keynote Extractor.
And sometimes nothing beats a lot of direct communication with the application developers.
The right solution depends on the software you are making, but for me one thing is certain: you have to be careful about your deliverable weight.
What do I mean by deliverable weight?
Imagine that you are creating the best design in the world. You pour many, many hours into it and try to nail every detail. You proceed to document the design in the best way possible by creating a large presentation that covers every nitty-gritty detail, to show how much you thought about every edge case.
It’s 200 pages long and it contains just about everything you want to know.
You deliver the presentation to loud applause in the board room. Everyone is happy and you pat yourself on the back for another successful project.
Six months later you check in with the developers to see what they implemented, and what they did was nowhere close to what you envisioned.
Where did it go wrong?
Did you fail as a designer?
Well, what happened is that you threw a bible over the wall and expected someone to read it.
Just today we won a new software design project, where the goal is to transition existing software to the web. The “old” software contains over 50 different modules/functionalities.
I’m guessing this will be the biggest Sketch file to date… if we go the route of designing every screen. I don’t think that’s going to be the solution.
A lot of software design is systems thinking, where you try to find out which patterns repeat themselves and find the best solution for that pattern.
When you nail the pattern, you suddenly have a solution for a lot of different parts of the software.
A few days ago I was at a reception and a lady from KU Leuven was talking about MOOCs. A MOOC is a Massive Online Open Course. Think of it as going to university but the whole experience is online.
This reminded me of a MOOC I enrolled in a few years ago called CS50, an online version of Harvard University’s introductory Computer Science course.
At the time I was wildly enthusiastic about it. I learned about programming concepts to make a small game using Scratch, then applied the same principles in C. The videos that explained the concepts were very high quality and I thought the whole thing was so well done.
One interesting aspect was that there was a CS50 subreddit where you could ask questions to your fellow students. The students would help each other and a community would form. It was very cool.
I ended up doing only a few weeks of the course and then gave up due to lack of time and interest, but it was still a good learning experience.
Yesterday I enrolled into a new class… this time on something entirely different: screenwriting. It costs some money — $90 — as opposed to CS50 which was free. But I feel that that’s a small price to pay for valuable content. Onwards!
I wonder if there’s any movement in the CSS framework space. Foundation for Apps is tied to Angular, which means it’s a no-go for me. Bootstrap seems to be dying under its own weight: they are unable to ship a new version because so many people collectively started depending on it.
To me an ideal CSS framework is based on Harry Roberts’s principles like BEM and ITCSS. It would use modern techniques like rem-based sizing and flexbox. This is the type of front-end code we write for our clients in the ideal situation; but I wonder if there’s any Bootstrap or Foundation alternative based on these ideas. I know about InuitCSS, but I am really talking about a CSS framework that also has a family of extensive JS components.
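To give a rough idea of what such a framework’s code could look like, here is a minimal sketch combining BEM-style naming with rem-based sizing and flexbox. The `.card` component and its class names are made up for illustration:

```css
/* Block: a self-contained component, sized in rems so it
   scales with the root font size. */
.card {
  display: flex;
  flex-direction: column;
  padding: 1rem;
}

/* Element: a part of the block, named block__element. */
.card__title {
  font-size: 1.25rem;
  margin-bottom: 0.5rem;
}

/* Modifier: a variation of the block, named block--modifier. */
.card--horizontal {
  flex-direction: row;
  align-items: center;
}
```

The point of the naming scheme is that every selector stays flat and self-documenting, which is what makes a large codebase of components manageable.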
I am looking for a programmer for a new project. I have a preference for programmers with experience in both macOS development and JavaScript development. You must know your way around the Mac and have a lot of general programming experience across several languages.
The project is about publishing blog posts from the Mac. The programming itself is mostly about file formats, and parsing text and images.
This is a paid project but as this is more of an experimental project funded by myself it doesn’t have the craziest budget. I promise it will be fun though.
I have to choose a design tool for a new wireframe. I want the collab of Figma, the prototyping of XD, and the symbols and detail levels of Sketch. Can’t have it all I guess.
I got an HTC Vive a couple of weeks ago and so far I’ve been exploring how to use it, trying different pieces of software and games.
From a professional perspective I am taking in all the different design patterns. I started work on a blog post about UI design patterns in VR, something I am very interested in.
VR is often seen as a “gaming” technology but the most interesting things that I’ve experienced are not games at all. Discovering places in Google Earth VR or building things in Sunshine3D are among my favorite experiences so far, and I wouldn’t exactly call them games.
A few days ago I purchased Tilt Brush by Google and it is probably the coolest thing that I experienced this week – and it’s not like there weren’t any other cool things going on.
There’s one thing that I like to demo, called “Solar System”, contained in Valve’s The Lab. You are placed inside the solar system and you can fly around. What you’ll see is that the sun is freaking huge compared to every other planet. I know we once learned this in biology class… but a dry fact like “the sun is 109 times the diameter of the earth” doesn’t stay with you the way actually seeing it does. Well, seeing it virtually, I guess.
I was on the fence about buying this piece of tech, as it is quite expensive and I wasn’t really sure if I was going to use it often. It’s also not just a question of getting the Vive – you also need the necessary space and a gaming PC.
I am lucky enough to have a spare room in my current living space where I could set up the Vive but many of the previous places I’ve lived would just be too small.
After bouncing between the initial “Holy ****! Wow!” and the week-later “did I really spend all of that money?” I can now confidently say that it is very interesting and I definitely don’t regret the purchase.
I posted a message to the Antwerp UI Design Meetup message board to check if there is any interest in a meetup. If you’re interested please let me know there. Also, feel free to comment with your VR experiences or just get in touch about VR.
I started learning web design in ’06 and I was heavily influenced by the web standards movement. I was basically schooled by the likes of Zeldman, Jeremy Keith, Dan Cederholm and Andy Clarke.
My only critique is that – as a designer of highly interactive applications – the example of progressive enhancement is way too simplistic. But I guess that’s just where I’ve never agreed with Jeremy anyway.
If you want to read said example, I refer you to the part about software in Chapter 6 of Resilient Web Design, where they talk about Editorially (RIP).
I am happy that the concept of progressive enhancement exists and I am a big proponent. But at the same time I feel the need to talk about situations where PE is less than feasible; and where it’s just nonsensical.
Since PE is sometimes presented as a “moral” argument where PE’ers have the moral high ground (“If you don’t progressively enhance you don’t care about users!”) I want to talk a bit about the practicality of said enhancement. This is also partially covered in Chapter 7 of Resilient Web Design which talks about the challenges we face towards the future.
I get that in this React world there needs to be some education about where we are coming from. If some people think it’s OK to build a simple content website that does nothing when the JavaScript doesn’t work, they need to be schooled. Obviously content websites need to be built on a solid HTML base. There needs to be a separation of concerns between structure, presentation and behavior wherever feasible.
The gist of the progressive enhancement logic is that everything can have a fallback, thus creating a web that is less likely to break and more accessible. When the JavaScript fails there should still be server-side navigation. When your CSS doesn’t load you should still be able to use the website. The idea is that you layer tech on top of each other and when one piece fails you have a fallback.
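To make that layering concrete, here is a minimal sketch of an enhanced navigation link. The `/archive` URL and the `id`s are hypothetical; the point is that the plain HTML works on its own, and the script is an optional layer on top:

```html
<!-- Base layer: a regular server-rendered link. Without CSS or
     JavaScript this is still a working navigation. -->
<main id="content">
  <a id="archive-link" href="/archive">Archive</a>
</main>

<!-- Behavior layer: if this script loads and runs, it enhances the
     link into an in-page fetch; if anything fails, the browser falls
     back to a normal full-page navigation. -->
<script>
  var link = document.getElementById('archive-link');
  if (link && window.fetch) {
    link.addEventListener('click', function (event) {
      event.preventDefault();
      fetch(link.href)
        .then(function (response) { return response.text(); })
        .then(function (html) {
          document.getElementById('content').innerHTML = html;
        })
        .catch(function () {
          // Network or parsing failure: do a full page load instead.
          window.location.href = link.href;
        });
    });
  }
</script>
```

For a content website this pattern is cheap and robust, which is exactly why the PE argument is so compelling there.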
The problem is that this is not really true.
Not everything is a content website consisting of pieces that can have “fallbacks”. Vast pieces of software are being built with web tech. Apple’s Pages in the browser is an entirely different beast than your average news website. Enterprises are moving every bit of their software to a web stack.
In theory most of the things in these software packages can be progressively enhanced. A data table can be turned interactive. A tree component can be made accessible, as evidenced by the WAI-ARIA examples on GitHub.
This is all nice and dandy until you actually try to make it work across different browsers, platforms, screen readers etc.
I guess I have a problem with hardline PE’ers who have only ever created simple content websites, and then take a “moral high ground” stance.
The higher the level of interactivity the harder it gets to create a fallback.
For images you can write alt text. For video you can create transcriptions. But things get progressively harder and more time-consuming – as anyone who has ever done subtitling for a two-hour conference talk can attest.
Some visual representations of data — like calendars, or scatter plot charts — make no sense to a visually impaired person. You could try to describe the data, but depending on the chart that is nigh impossible.
One fallback solution for some things is to build an alternate way to access the same data. For example, for a calendar that means a list view.
But when you reach a certain level of interactivity there’s just no way to keep the PE argument up.
For example, many tools for creating content are impossible to progressively enhance. There is no fallback for the canvas in a drawing app. How exactly do you apply PE thinking to something like Figma?
Last year I worked on a tool that is basically Hype in the browser: an animation tool with a timeline. How does this get a fallback? Even if you were to find an appropriate fallback for individual pieces (e.g. a progressively enhanced color picker), you wouldn’t be able to use the application anyway if one piece breaks.
If you spend dev time trying to progressively enhance something that highly interactive, you’re essentially building for a situation that just doesn’t exist. Sometimes dev time spent on PE is just nonsensical.
The argument that it’s not possible to progressively enhance something is sometimes used too soon – but in my experience building highly interactive web apps there are plenty of situations where I can argue that no, it’s not going to happen.
A lot of my job is about making sure software doesn’t break, that we use the right (web) tech, and that we build accessible interfaces. But there are plenty of situations where I just have to tell PE hardliners that the story is just a bit more complicated than they think.