Johan Ronsse

  • iPad Pro – Part One – Introduction

    June 21, 2016 - Posted in computers

    This is the start of a research article about the iPad Pro. My goal is to try and use the iPad Pro as a serious working device.

    Apple touts the iPad Pro as the future of computing. They say that it’s an uncompromising vision of personal computing for the modern world. That it makes complex work as natural as touching, swiping or writing with a pencil.

    As someone who uses a computer to make a living, it’s natural to want things bigger and better. For years we (and by that I mean pro users) have been asking for more processing power, more ports and bigger screens.

    If there’s a new development in computing I want to test it. If there’s something that could improve my productivity – even in a minor way – I have to try it. At some point in the not-too-distant past I had three 27-inch screens on my desk at the same time. Maybe that was a bit too much, but it was worth the try. I wanted all my screens to be retina, I wanted a faster MacBook, I wanted all of it.

    Around that time I moved to Japan, and because I was in a coworking space, moving from desk to desk, I had to rely on a single screen to do my work. It was just my MacBook and nothing else. In the beginning I thought I really needed all that screen space to be productive, but what ended up happening is that I had some of the most productive work months of my life there – all on a single 15″ screen.

    For years, at every Apple keynote, I’ve been hoping for a better MacBook. But I think what Apple is showing us with the iPad Pro is that the future maybe isn’t a continuation of the past. Maybe iOS as a whole is a clean break in computing that will, in the end, make us much happier than the old-school world of moving files around.

    I think that with every release they are trying to make it more powerful without turning it into something bad. And maybe they’re succeeding. This is the first in a series of posts where I examine the iPad Pro as a serious computing device. I wrote this blog post on the iPad and I must say it went quite smoothly.

  • A way to export Keynote presentations as proper HTML

    May 23, 2016 - Posted in Uncategorized - 6 comments

    When I give a presentation, my notes are the full text that belongs to each slide.

    It looks like this:

    [Screenshot: a Keynote slide with the full talk text in the presenter notes]

    This is great because when you are in “presenter mode” and you stumble over your words, you can just read the notes. I try to rehearse every presentation I do, but I don’t always know it 100%. The notes are my fallback for delivering a smooth presentation.

    Now, when I want to share my presentation online I have a problem: the slides are pretty bare and minimal, and the actual content is contained within the notes.

    In the past I reworked some of my presentations to contain the notes text within the slides. It looks like this:

    [Screenshot: a reworked slide with the notes text included on the slide itself]

    I would then share these on SlideShare.

    It was a lot of work, and most of the time, when you’ve just given a presentation, you kind of want to let it go and work on the next thing.

    But I knew that this method conveyed the presentation’s message in a much better way. Some of these slideshows got really popular.

    What’s not ideal is that the text is contained in images. It’s not accessible, it’s not very SEO-friendly… it’s against everything I learned back when people still cared about web standards.

    Beyond that, what has always bothered me here is that my content is living on someone else’s platform. Apparently SlideShare has been bought by LinkedIn now, which makes things even worse. As hundreds of startups have taught us by shutting down without any export options, you have to own your data. Everyone should have read FUCK THE CLOUD.

    I tried to solve my “slide sharing problem” with video, but that’s even more work. On top of editing a whole video project, I spent hours making subtitles for that talk and gave up somewhere around the ten-minute mark.

    This doesn’t work. How do I fix this?

    Some people have argued that I could use something like reveal.js to create the slides, but this is not a solution to my problem. It’s simply an alternative to Keynote.

    For the kind of presentations I do, I want to be able to do a bit of layout, drag images into slides, etc. It’s very hard to do this in reveal.js-style presentations.

    Now, I’ve read presentations on the web in this way before, and I always thought that was a good idea:

    [Screenshot from “Internet with a human face”: slide images alongside the accompanying text on one long page]

    This is from Maciej Cegłowski’s “Internet with a human face”, which you should really check out.

    Another example is Scott Jehl’s Delivering Responsibly, which is very well done.

    Basically it’s just a really long HTML page where the slides are next to the text.

    It’s pretty simple, but how do you achieve this without a lot of copying and pasting?

    I asked Maciej on Twitter how he did it, and he said there’s no trick to it for him – just manual work.

    I don’t like to do manual work when a computer can do it… so I found a way to automate it. It involves some steps, but it’s nowhere near as labor-intensive as doing it manually.

    • First you do an image export of your presentation
    • Then you do another export, in the Keynote ’09 format
    • Rename the Keynote ’09 export (in .key format) to .zip
    • Unarchive it and find a file called index.apxl
    • Now run this script on the index.apxl file
    • Now you have an index.html file with your notes. Place the output of the image export step in a folder called ‘images’
    • Then install keynote-extractor. You can install it globally using npm install keynote-extractor -g. (This code was written by my great colleague Thomas – great job!)
    • Run the keynote-extractor command in the folder containing the index.html and the images folder
    • Tadaa! Here is your presentation.
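
    If you want to script the file-shuffling parts of the steps above, here is a rough Python sketch. All paths are placeholders, and the apxl-to-HTML step just shells out to the script mentioned above under a made-up name, so treat this as an outline of the process rather than the actual tooling.

    ```python
    # Rough automation sketch of the steps above; paths and the apxl-to-HTML
    # script name ("apxl_to_html.py") are placeholders, not the real tooling.
    import shutil
    import subprocess
    import zipfile
    from pathlib import Path

    keynote09_export = Path("talk-keynote09.key")  # the Keynote '09 export
    image_export_dir = Path("talk-images")         # output of the image export
    workdir = Path("talk-html")
    workdir.mkdir(exist_ok=True)

    # A Keynote '09 file is a zip archive: copy it to .zip and unpack it.
    zip_copy = workdir / "talk.zip"
    shutil.copy(keynote09_export, zip_copy)
    with zipfile.ZipFile(zip_copy) as archive:
        archive.extractall(workdir / "unpacked")

    # Locate index.apxl inside the unpacked archive.
    apxl_file = next((workdir / "unpacked").rglob("index.apxl"))

    # Turn index.apxl into index.html with the linked script (placeholder name).
    subprocess.run(
        ["python", "apxl_to_html.py", str(apxl_file.resolve())],
        check=True,
        cwd=workdir,
    )

    # Put the exported slide images in an 'images' folder next to index.html.
    shutil.copytree(image_export_dir, workdir / "images", dirs_exist_ok=True)

    # Finally, run keynote-extractor (installed via `npm install keynote-extractor -g`)
    # in the folder that contains index.html and the images folder.
    subprocess.run(["keynote-extractor"], check=True, cwd=workdir)
    ```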

    I’ll share the slides of my recent talk soon.

    Give it a try for yourself… and let me know what you think.

    P.S. Keynote has an “extract to HTML” option, but what it returns is a big pile of bloat. It also eats your notes in the process. It’s not usable for my use case.

  • A few words on open data, local apps and government-subsidized application projects

    May 3, 2016 - Posted in Uncategorized - 1 comment

    I read that the Flemish Government is holding a contest where you can win €20.000 if your application makes use of open data provided by Flanders.

    You have to use a dataset provided by them to be eligible to enter the contest.

    The power of open data is that application makers can combine data from multiple sources to create an app that delivers something that wasn’t possible before.

    Toon Vanagt, who runs data.be, has the perfect example. His website combines data from multiple government sources (Staatsblad, FOD Financiën, etc.) to list public information about companies in a very good way – a much better way, in fact, than the official websites.

    A few weeks ago I downloaded an app called Bikey. It’s a handy app that lists bike sharing stations. And whether you are in Brussels or Antwerp, it just works.

    The interface is beautiful, and if you use these bike sharing systems I can recommend it.

    This is great: instead of downloading a separate app for Velo Antwerpen, the Brussels bike sharing system Villo, etc., you can rely on a single application. What would be even better is if this data were integrated into your favorite mapping application.

    A few years ago I helped make an app called West-Vlinderen. It’s an app that helps you plan walking or biking routes in West-Flanders. That’s too local: bike for an hour or two from any point in the province and chances are you’ve already left West-Flanders.

    It would be much better if it helped you plan walking or biking routes across the whole of Belgium.

    The data is there, but the money behind the project comes from the West-Flanders tourism organisation. So the natural focus is to limit yourself to one region, which is a pity.

    A quick search for biking route apps on the App Store only returns a handful of terrible looking amateur offerings (search for “fietsknooppunten”).

    Last week I downloaded an app that lists the museums in Antwerp. It should just have been part of a responsive website about tourism in Antwerp. Now the museum opening hours are locked inside an app that nobody can search.

    What we end up with is a lot of local apps that are just too small, either in geographical scope, in vision, or in quality.

    You have to look at the economics behind these things. Creating a good app takes a lot of time and effort.

    The government money is divided among so many levels: federal, regional, provincial and city.

    The organisations behind these applications are not software houses. They are government organisations that lack the capacity to build their own software, so they perpetually have to rely on agencies to do the work for them.

    Hiring a team to build a good application that is limited in scope can easily run you €50.000. Less than that and you’re going to have to make concessions that nobody likes, like releasing an iOS app only, or knowingly making a worse app than you could by choosing inferior technology.

    The market is updating constantly. If you have an app, every year there is a new version of iOS and Android you have to support. It’s not just the initial cost: there’s an ongoing cost. There’s a graveyard of apps out there that never got past version 1.0.

    I won’t even link to the app I worked on because I am ashamed of what it looks like now.

    Tax money is used to build apps that really shouldn’t exist in the first place. The government is then proud to report a thriving “app economy”, built on the backs of the highest-taxed workers in the world. That’s just stupid.

    It’s the government’s job to provide the open data; then app makers can go to work to build experiences around that. Lower our taxes, provide the data, and let the free market do the work.

    Those museum hours from before? If you Google for a museum, you get the museum hours. That’s because Google got that data from somewhere. Google has a lot of resources. If we want to avoid a world where Google is the only lens to information, data needs to be open.

    I feel that if the government wants to help they should do so by policy: force government organisations to open up their data.

    This is happening right now and that’s a good thing. But there should be enforced quality standards as well.

    If you look at the reality of open data, it is pretty painful.

    If you search the various portals, the quality is just not there.

    There’s the federal data.gov.be, the Flemish open data portal, the Walloon open data portal; but then there are separate sites for Antwerp, Ghent and Brussels.

    Why do we need so many websites for this when all I want are documented JSON APIs?

    A search on TEC (the Walloon bus company) data returns one Word document and a page that returns a 404. A Word document is utterly useless as open data. This should really be forbidden. Word is not a structured format that can be used for apps.
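
    To make the contrast concrete, here is roughly what consuming a properly documented JSON endpoint looks like from an app developer’s point of view. The URL and field names below are invented for illustration; they don’t refer to any real portal.

    ```python
    # Hypothetical example: the endpoint and field names are made up, but this is
    # all the code an app needs when data is published as a documented JSON API.
    import requests

    response = requests.get(
        "https://opendata.example.be/api/bike-stations",  # placeholder URL
        params={"city": "Antwerpen"},
        timeout=10,
    )
    response.raise_for_status()

    for station in response.json()["stations"]:
        print(f"{station['name']}: {station['free_bikes']} bikes available")
    ```

    Try doing that with a Word document.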

    And what about the aforementioned contest? I think a contest is the wrong way to go about things. I for one don’t welcome a proliferation of rather amateurish PhoneGap apps made to impress a jury, only to die a quick death afterwards.

    Technically my friends at iRail and Railer can’t apply for the contest. They are using data about “Belgium”, not “Flanders” – even though they are providing a very valuable application to the people of Flanders.

    Railer is the top-rated application for finding Belgian train times in the iOS App Store. People like it much more than the official NMBS application. It would really be helped by €20.000, so the developers could find the focus to make it even better. Right now it’s literally being made in someone’s free time.

    iRail is doing a Kickstarter-like campaign to raise money to develop the Spitsgids, a feature to check which trains are busy and which ones aren’t. The NS (Netherlands Railways) has had this feature for a long time.

    If we are to make things better, there’s definitely some more work to be done. To this end, Mono is once again supporting the upcoming Open Summer of Code.

  • Symbol workflow in Sketch 3.7

    April 26, 2016 - Posted in apps

    As posted in a comment on Designer News:

    I love the new Sketch features and in fact they have been helping my current workflow a lot. I am working on a project with a ton of complex forms. My symbols look a bit like this, depicting every state of the form:

    This is just one small part of the UI.

    Lack of in-context editing is the number one shortcoming of the implementation right now. I often have to measure how many pixels I need to move something to make it “correct” and then apply that within the symbol.

    I would love some way to have two panes open: one with your symbols and one with your designs. In fact I think I saw a screenshot with just that in an old thread about Sketch 3.6, but maybe I was dreaming.

    The styles syncing is debatable; I don’t think it gets in the way of experimentation and direct manipulation. For me, the situation where you are looking at elements on multiple artboards at a time occurs far less often than the situation where an inadvertent change triggers a global styling change. For example, if you have 1000 layers that use the “standard text” style, and you change it without remembering to make a new style or set the text to “no layer style”, you effectively get into a massive beachball situation where Sketch is trying to apply your styles to many layers at once.

    I also have some minor quibbles about not being able to detach a symbol until it’s “focused” (you have to left-click first and make sure it’s the selected artboard). I think that’s just a bug.

    By the way, did you know you can turn an entire artboard into a symbol? So you can prepare your states separately on small artboards and then match them together using instances. This might not mesh with some people’s workflow but I love it.

    Currently, to mitigate the in-context editing issue, you can try the following: if you position your symbols close to the designs you are working on, you can kind of do in-context editing if you have a large enough screen. But it involves a lot of direct manipulation to position the artboards correctly.

    I also have to do a lot of “symbol hunting” now, where I have to go and look for where Sketch positioned the new symbol. I think it’s placed to the right of the rightmost artboard, but it can be pretty far away. A trick to get to a symbol fast is to select it in the layers list and use Command + 2 to zoom in on it.

  • Got the 80D

    March 31, 2016 - Posted in photography


    Got my new camera! I am pairing a Canon EOS 80D with a Rode VideoMic Go and a 10-22mm f/3.5-4.5 lens (becomes 16-35mm). Time to make some awesome videos.

  • Looking for the perfect video camera

    March 27, 2016 - Posted in photography

    I had a lot of fun learning the basics of photography a few years ago. I was a real gearhead in the beginning, always looking at gear as a way to improve my photography. I would test new cameras (mirrorless was all the rage!) and I would do intricate light setups in my living room to practice product photography.

    But at some point I realized that to make the images I wanted to make, the hard work wouldn’t be in the technical side but in the search for the right subject, and in the subsequent work to make the image happen.

    In terms of actual cameras, I went from a Nikon D70 to a D7000 and eventually to a D800.

    I decided to sell all my “professional” photography gear and got myself a small camera: the Sony RX100 III. If you are looking for a great point-and-shoot, look no further: this is it.

    Now, I was a bit bored with regular “random” photography (travel, daily life, etc.), but I had an idea for a side project: I would make a small video about a conference in Tokyo and learn Premiere in the process. This side project threw me into the world of video, which was exciting, because suddenly there was a lot of new stuff to learn again.

    That video was shot with the RX100 and my iPhone. The RX100 is pretty great for video. It has a nice codec, and the quality you can get out of it is so much better than your typical DSLR footage.

    But, as you can probably notice, the audio is terrible. I did the audio with a combination of small mics but that ended up being a lot of work for not a lot of quality.

    I decided I would need a camera that can take an external mic to improve the sound on future videos. I was thinking about what I wanted in a camera and landed on the idea that I needed to switch back to digital SLRs.

    They have better autofocus, the battery life you need for video, a mic jack, a mount for a shotgun mic, they are versatile with lenses etc. So I’m at the point where I want that SLR quality again, but this time with a camera that’s suited to video.

    I tested an EOS 760D (also known as the 8000D; it’s the same thing) for a while and I thought: holy shit, this is awesome. Canon had won me over. This thing had a touch screen that was also a fully articulated screen, and the image quality was great. I forgot how good SLRs could be.

    So I did some research and it turns out that the Canon 760D is pretty similar to the Canon 70D.

    I’ve been watching a lot of YouTube lately and the 70D is super popular with vloggers. It’s cited everywhere as THE best camera for vlogging. There are hundreds of YouTubers who have copied famous video blogger Casey Neistat’s setup, which includes a 70D on a Joby Gorillapod. Casey is using a 10-22 lens (a quite expensive one at €500). The 70D has an APS-C sensor (1.6x crop), so this 10-22 becomes an effective 16-35mm.
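
    (The crop-factor arithmetic is straightforward: 10 mm × 1.6 = 16 mm and 22 mm × 1.6 ≈ 35 mm, hence the effective 16-35mm.)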

    This range seems perfect to me for small documentary-style videos, interviews, product videos, etc. – the videos I am looking to make. Interestingly, a few weeks ago Canon announced the 80D. It’s showing up in stores around this time with a €1299 price for the body only. I ordered mine a few days ago and hopefully I will have it next week. I’ll be trying it with that 10-22 lens.

    This 10-22 lens has an EF mount which basically means I can reuse it if I ever buy a full-frame camera again. I hope that in the future Canon comes out with a lightweight full-frame camera. In the meantime I will be making videos with my new setup. Onwards!

  • Looking for a new word processor

    February 10, 2016 - Posted in Uncategorized - 4 comments

    I can’t believe I’m about to ask this in 2016 but I am looking for a new word processor.

    I am currently using Mou, but manually writing file references and not having a preview of what your document will look like feels a bit 1989.

    I tried MarsEdit but it doesn’t suit my needs. What I want should be rich text, like Pages or Microsoft Word, but:

    • It shouldn’t mess with my original image files, i.e. when I drag a JPG into the editor, I should still be able to access that JPG in its original, uncompressed form.
    • I would also like it to export to clean HTML, e.g. a paragraph is a paragraph, headings become h2, etc., without extra weird styling tags.

    Does anyone know of something out there built for this purpose?

  • Better ways to work: 37Signals

    January 27, 2016 - Posted in Uncategorized

    It’s funny how we think that “everyone knows” something, and it turns out that that really isn’t the case.

    I was having a conversation about what I want Mono to be, and I referenced 37Signals (now known as Basecamp).

    The other person didn’t know that company, and I thought: huh? How can you not know 37Signals?

    For me, 37Signals is a company that inspired me at multiple points in my career. I’ve been reading their blog Signal v. Noise since what feels like forever. They wrote some brilliant books that have influenced the way I think about software. They have some extremely smart people on the team.

    So, I was yammering on about what I want to do with Mono; and the topic turned to 37Signals. They are a great inspiration for how things should be.

    As we are growing, there are decisions to be made about how we deal with certain aspects of work within the company: buying software, holidays, the tools we use, etc.

    I like 37Signals’s approach to a lot of things. For instance, they have this policy where you work a bit longer in winter, but in summer you only work four days out of five. That just makes a lot of sense.

    At Mono we don’t really have a holiday policy: you just take holidays when you want. The idea is that you let the team know in advance, and you don’t overdo it (e.g. take a three month retreat every year).

    This works for now, but at some point we might formalize it more. I’ve been having discussions with other founders, and one problem they point out is that some people just end up not taking holidays, which is not really something we want to promote.

    Another thing I always liked about 37Signals was their policy to give every employee a credit card that the person could use for work related expenses. That implies a high level of trust and cuts down on the red tape.

    There’s so much more to say about 37Signals – for example the fact that they operate as a remote company and wrote a book about it – but this post is already lengthy enough. Onwards!

  • Icon sets

    January 26, 2016 - Posted in Uncategorized - 4 comments

    I am compiling a list of high quality icon sets. Here’s what I have:

    • AIGA symbol signs
    • Glyphish
    • Nova
    • Geomicons
    • Font Awesome
    • Helveticons
    • Entypo
    • Glyphicons
    • Picons
    • Maki
    • Octicons
    • Pictos
    • Symbolicons

    Any favorites welcome in the comments!

  • What is the point of a hackathon?

    January 22, 2016 - Posted in Uncategorized - 2 comments

    These days I am getting all these e-mails to join hackathons about certain topics. I wonder what the organisations that set up hackathons are actually hoping to achieve.

    They are usually set up by government-funded organisations (like MIC and VRT) and by big government-funded companies (Proximus, Bpost).

    I once participated in Apps for Ghent. I thought it was a nice idea, and I had fun building something quick in an afternoon. We made an app called Doctors in Ghent which, now that I think about it, is totally useless, because a directory of doctors should not be city-specific, much less contained in an “app”.

    The event mostly showed how bad the public data was at that point. I think things have improved in recent years thanks to the efforts of some great people (thanks Bart and Pieter!)

    But anyway, back to hackathons. Why would I go to a hackathon? The only reason I can think of is to meet new interesting people, but in that case, why don’t you just a) throw a party or b) host coding nights where people can meet and work together on meaningful projects?

    The people at 10to1 used to do these coding nights and that was pretty cool.

    What I dislike about hackathons is that they always seem to involve the same structure in which meaningful work is impossible: find a random group of people, brainstorm about an idea, and start coding it. In a single day.

    Developing an idea into something meaningful takes months. Actually doing something real takes years. So why would you sit down and try to do something in a single day?

    To me, a lot of this smells like organisations that don’t really know what to do with their (government) money, who have an internal brainstorm session about how to “innovate”, which leads to somebody coming up with a hackathon to “get people to innovate”.

    I recognize that a lot of connections have been made through hackathons, startup weekends and even startup buses, but isn’t the innovation aspect of it a bit of a sham?
