Circular Web Development

To reinvent how engineers weave the Web so that they can carry humanity forward.

The following corpus is a book draft. It will change a lot as I go through the creative process.

Philosophy

The World Wide Web is the closest thing we have to a god-like entity. 60% of the world population can commune with it from a pocket-sized device, most online activities are monitored, and I can download an app to water my plants: the Web is omnipresent, omniscient, and seldom omnipotent. But as ancient myths show, the line between god and monster is thin. The web engineer faces a Promethean dilemma, not unlike Dr Frankenstein's.

Web developers bear the duty to act in the best interest of humankind, from the individual level to its whole environment: cultivating and perfecting humanity, without neglecting the very fabric of nature. In this statement lies the purpose of Circular Web Development: to reinvent how engineers weave the Web so that they can carry humanity forward.

More than a philosophy manifesto, circular web development is an in-depth technical guide on how to build sustainable web-based software.

5 Principles of Sustainability - The Framework

As I found out in the first article of this series, sustainability is the balance allowing a population to fully express its potential without endangering the carrying capacity of its environment.

This balance results from the harmony between five forces interacting with each other: the material domain (natural resources), the economic domain (wealth), the domain of life (biodiversity), the social domain (culture), and the spiritual domain (ethics).

A sustainable project must address all five domains. To do that, Dr. Ben-Eli proposes five corresponding principles.

Material domain: the first principle derives from the fact that all physical processes abide by the first and second laws of thermodynamics. Growth is limited by the law of conservation of energy, so we need to use our natural resources as best we can. Humans have the potential to act as agents of order that can contain entropy, so we must also do our best to increase the performance of our industrial processes.

Economic domain: the second principle proposes that our approach to economic growth is flawed and that we need to reinvent our conception of wealth to take into account all forms of capital (human, social, manufactured, financial, and natural), measures of human development, and nature's regeneration capacity.

Domain of life: "biosphere diversity has to be maintained", the gene pool of all living beings has to be conserved and diversified.

Social domain: social diversity is the catalyst of human knowledge. All humans should be given maximum freedom to blossom and self-realization opportunities, without anyone adversely affecting others.

Spiritual domain: all sustainable efforts have to take into account the fact that the human spirit seeks transcendence. Without this aspect, we can't be united by "a common purpose to provide a common foundation and stimulate common resolve".

Back and Front

I love being a full-stack developer, and I don't see myself choosing between front-end or back-end anytime soon.

They are two sides of the same coin. As Masanobu Fukuoka once wrote, an object seen in isolation from the whole is not the real thing: a specialist lacks awareness when it comes to grasping the entire picture to make the right product decision.

I have nothing against people who specialize in one area or the other. Choosing between back and front is important when you work at a company to prevent unrealistic expectations from your managers: you cannot expect an employee to juggle between different positions and do an above-average job at each one. We only have a few hours in a day to add value, after all.

But I feel like setting up imaginary barriers between the two limits our creative capabilities and the opportunities that follow. A specialist can easily be replaced when the industry isn't too niche. There are tons of front-end developers and factory workers operating on an assembly line, for example, but few Leonardo da Vincis.

In fact, experts and masters aren't specialists. They aren't just great at a single thing: they constantly push their boundaries. David Goggins could be perceived as a fitness specialist at first glance, but he is also a proficient speaker, writer, entrepreneur, Navy SEAL, and firefighter. Santiago Calatrava is a famous architect, but also an engineer, a painter, and a sculptor.

When the doors are closed, you can't afford to be happy with being a specialist. You are a human being with a very diverse set of interests and a unique outlook on life: it's your duty to merge all these specialties together to come up with something that's yours. Your very own indescribable specialty, if I may say so.

Becoming a generalist is a powerful weapon, because it allows us to judge a problem in its entirety. We are then free to learn what we lack to solve the problem ourselves, or we can present the problem to a specialist who will give us the keys to its resolution. Free from the chains of a label given to us, we can do anything.

Big Evilcorps

There was a time when I wanted to work at a big tech megacorp. Six-figure paychecks, a good employee package, respect, a good line on the resume, and important challenges to solve. A "Dream Job".

Then I read about what a typical day at Google/Facebook/Apple/[insert relevant company] looks like, and I wasn't so excited anymore.

Then I watched Mr. Robot and the question of ethics came up.

I wanted to rebel, I wanted to become a counter-power with a strong moral compass.

Startups appeared as the antithesis of the typical evilcorp. At first.

After co-founding one and going through an incubator, I understood that 99% of the 10% of startups that survive are fated to become evil: to grow too big for their users' own good, or to be acquired by a megacorp.

Startup founders are no rebels, they are destined to feed the vicious circle of venture capitalism.

We live in a tightly-coupled system where everything is linked: there is no living independently from big corporations.

I am tempted to say "only the strong survive", from a purely Darwinian perspective, but it's incorrect: only the really big or the really small thrive. Humans cohabiting with bacteria.

One might argue indie businesses are different from VC-backed startups in this regard. Not true. Any service you use is somehow linked to a global country-sized company: you depend on Google for SEO, or Amazon for infrastructure, or Apple for the iOS app market, etc.

When you buy from a business you participate in the growth of its partners. It's especially true for B2B and B2B2C businesses.

How to break free from big corporations to become truly independent then? Is it even possible to be free? These are questions worth living for.

You might be tempted to try changing things from the inside. It's a pretty naïve statement I often hear. The reason why it doesn't work is pretty simple: companies are not democracies. An individual going through a particular environment for a long stretch of time always ends up molded by it, even unconsciously. Decisions are made by those who lead, and reaching the top of the ladder is a matter of amoral politics, not the result of strong ethics driven by virtue and wisdom.

There is only one way to be truly independent: self-sufficiency.

You aren't indie from the start; it's a constant effort. Self-sufficiency is the result of a maker mindset: the will and the ability to do things yourself, with others.

You can be self-sufficient by yourself or in a community. All that matters is agreeing on a common set of actionable values: privacy-first, openness, collaboration, etc. Carefully considering the business partners who will accompany you on your journey is of the utmost importance.

The more you grow, the pickier you can become when it comes to making business decisions. You can develop more features in-house, and the quest for mastery always pushes you to improve and know more of what matters to fulfill your business needs.

Making a business is political. You need leverage. You need real customers who will help you to help them. Financial independence is indeed a central concept in self-sufficiency. I don't think you can consider yourself independent when all your actions are driven by the need to please your investors. Investors can also be real customers, but they are incredibly rare and need to share your values, which is why it's more of a matter of fate to meet the right ones. Looking for investors for the sake of raising funds is a dead-end: ramen profitability must drive the founders from the start to create a sustainable business. Then the right investors will come to you, not the other way around, and you will have the leverage to keep your vision intact.

An indie product is a garden: it takes time and patience.

Is it even possible to reach self-sufficiency? Ironically, I think mega-corporations show us it is, to a certain extent. Contracts should never prevail over laws, and companies are always required to submit to countries. There is no living outside society. On the other hand, you can consider Apple or Google to be self-sufficient from a purely economic point of view. Apple doesn't need Google to sell iPhones; it has enough leverage to attract others to its organization.

to be edited and continued another day...

Challenges of Custom Website Development

Websites have become a commodity: it's never been easier to make yourself one.

The demand for websites keeps increasing, but web agencies are progressively being replaced by automated SaaS solutions that do not necessarily require knowing how to code.

On the other hand, websites aren't as simple as they used to be and are closer to full-blown web applications.

This balance between complexity and ease of access has a cost called performance. Developers and agencies specializing in website development will have to address this problem to stay competitive.

The website development industry is similar to the car industry in many aspects. The question is not whether or not we can make websites, it's how well they can perform. We need something like Elon Musk's Tesla for websites.

Performance is twofold: it's about designing websites excelling from both a technical and marketing point of view.

We need websites that are fast, scalable, SEO-ready, and widely accessible in any network condition. Hosting slowly-changing websites should be virtually free and designed for a low carbon footprint, while still giving administrators the opportunity to make content changes as needed.

At a marketing level, websites have to be designed using a copy-before-code approach: websites are communication mediums, so the content has to come before the design phase. Content has to be monitored and iteratively edited to increase conversion rates.

Owing to these two points, website makers need two sets of skills to make a living addressing the current key challenges of the industry.

Climate Change

Craftsmanship as a Tech Business Model

In the near future, tech companies will be divided between Google-like megacorporations and small tech businesses.

IT is getting increasingly distributed. With the rise of micro-services, some products that would have taken a whole team to make 10 years ago now take one developer to implement. Sometimes you don't even need to code anymore. Code and domain expertise are becoming a commodity.

Businesses will have to grow big or keep getting leaner.

On the other hand, people are getting tired of huge companies. There is a need for ethical entities that can be trusted. This is where small businesses come in.

I am not merely predicting it. It already happened in the agriculture industry. Local and/or organic food is making a comeback. Individuals are tired of being mindless consumers, and it's affecting the biggest companies as well: the offering is evolving to include more responsible products.

People always prefer meaningful transactions over dealings with faceless corporations. This is one of the aspects where small businesses win.

Consequently, going back to the craftsmanship model is a real opportunity for tech companies to survive and thrive.

Customer care, rather than call centers.

Speed of execution, rather than heavy processes.

Authenticity, rather than corporate brands.

Progress, rather than the sole pursuit of profit.

Ethics and engineering

Engineering is not just about technical mastery; ethics plays a huge part as well. No matter your engineering field, you are going to make moral choices. Science and technology are powerful tools impacting our daily life. With great power comes great responsibility. If physicians have the Hippocratic Oath, engineers need to make one for themselves. There is no such thing at a company level, of course.

Should we give up our values for a monthly paycheck and great benefits? I hope not. I do not want to. This is something you have to think about and discuss right from the start. Being stuck in a toxic environment is deadly to the soul. To me, becoming an entrepreneur roaming the world is a way to keep my moral independence. I do not abide by dark patterns. I want my creations to be purely helpful. To benefit mankind through the mastery of my craft. It doesn't mean what I do is perfect, but I'm always striving to improve. My choices are mine. It is my duty to be as transparent as possible and to seek the truth.

Tomorrow I'm officially graduating from engineering school. Tomorrow I'm going to write my own oath and swear on it.

Faster Internet

I've lived most of my life without a fast internet connection. It taught me a lot about patience, self-control, and the virtues of slowness. But now that most of my work happens online, I'm often constrained by my network: slow upload times, unusable websites, unstable video calls... it's not great.

My current theoretical throughput is 1.4 megabytes per second, but in reality it's closer to a measly 100 kbps. Imagine the hassle when I need to connect to a remote server using ssh to fix bugs or upload new code without downtime: oh the anxiety!

The location where we live is buried in the French countryside, so the telecom infrastructure is under-developed. Since I'm probably going to have to stay for several months at my parents' house until the apocalypse settles down, I took it upon myself to find a solution.

It's not the first time I've tried. We are already paying $50 per month for a crappy connection, so we can't afford much more. I looked for alternative ways to improve our Internet access, either with satellites or mobile networks, but I never found a viable solution. Until a week ago.

The French government currently offers a program for under-served regions: you buy a 4G box from an Internet service provider to replace your traditional ADSL Internet access, and part of the cost is financed by the public administration. Box, antenna, installation, everything is taken care of. We pay the same monthly price, but the connection speed is increased twentyfold. It's no fiber yet, but it's enough for our usage.

The only problem is that we are limited to 200 GB per month for a family of four, so we still need to be careful. We can't watch Netflix all day long, for example. Fortunately we aren't big consumers of streaming, but we do watch YouTube a lot, and I won't be able to live-stream every day. You can pay for additional data though, $18 for 100 GB, and I'm okay with paying this extra cost if need be.

The thing is I have no idea how much data I use in a month, so it will be interesting to learn more about my digital footprint. Overall, I'm excited to enter a whole new world of virtual possibilities.

Finding Meaningful Work

What do I want to do when I grow up? That's a question that obsessed me early on, from primary school at the very least.

Children go through phases: they want to be a firefighter one day, then a police officer or a YouTuber the next. I'm lucky to have parents who raised me without trying to push me into a career path. I was free to make my own choices, which triggered my will to be proactive in my search for a vocation.

I made a pact with myself: whatever the job I end up doing, I will love it. I already developed the intuition that you can make up your own meaning. What you do in life doesn't matter as much as where you're headed, and any direction is fine. You are free to choose the meaning of your life, there is no definite answer.

We are in an age of entrepreneurship where you have the opportunity to create your own meaningful job. Just take charge. The concept of Ikigai is outdated: you can combine any interest you have to come up with an activity you can be paid for, so don't limit yourself to what others think is a profitable career path. Dare to experiment, dare to spend time on side-projects you care about.

If you love dancing, or painting, or whatever career path your father figure says isn't realistic to pursue, just find a way to be paid for it. It's not always straightforward: you have to develop cross-domain interests to come up with an original business model.

Let's say you love sculpting. Your ability to get paid for what you do stems from your perceived mastery of your craft: what is it you have to offer and how much is it worth? Find a way to increase your perceived value by communicating about it: create info products, write about it, make videos, record a podcast... Anthony Bourdain is a great example: in his own words, he was an average chef, but he found a way to bring his art to the masses and make a living by combining his eccentric persona, his curiosity for travel, and his past experiences as a chef.

Your entrepreneurial spirit is your best asset to find meaningful work. Don't be afraid to use it.

Frankenstein Monsters

How to learn good programming practices? Create Frankenstein monsters: patched code so ugly it becomes a pain to maintain.

Mental pain is the best teacher. It forces us to find a solution, and fast.

I made a Content Management System for an ex-girlfriend's portfolio website when I was 21. It was coded in PHP. I had heard about the Model-View-Controller design pattern while I was studying software architecture in college, and I wanted my CMS to implement it. The codebase was so awful it still hurts, but it taught me a lot about software development, and I landed an internship in Geneva when I mentioned it. I had to adapt, so I started learning a professional MVC framework called Symfony to do all the heavy lifting for me. My productivity easily tripled, and so did the quality of my code.

Last year I launched a web app called 200 Words a Day using Symfony and the venerable JavaScript library jQuery. 200WaD eventually became a complex application and jQuery wasn't doing the trick anymore. I had to learn a better way to write front-end code. That's how I got into React, and a few days later I started migrating the whole front-end to it. The code was much more modular and thus easier to maintain and grow.

Last week I started hitting the limits of what I can do with React alone. I made this editor tool using DraftJS and React, and even though my code is modular (as much as it can be, anyway), the parent app file managing all the state is getting too big to maintain. This is why I'm now learning how to break it down into independent components backed by a Redux store.
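
To give a feel for what that migration looks like, here is a minimal sketch of a Redux store for a hypothetical editor; the action types and state fields are made up for illustration, not taken from 200WaD's actual code:

    // A minimal Redux store for a hypothetical editor (illustrative names only).
    import { createStore } from 'redux';

    const initialState = { content: '', wordCount: 0, saving: false };

    // The reducer is a pure function: (state, action) -> new state.
    function editorReducer(state = initialState, action) {
      switch (action.type) {
        case 'CONTENT_CHANGED':
          return {
            ...state,
            content: action.content,
            wordCount: action.content.split(/\s+/).filter(Boolean).length,
          };
        case 'SAVE_REQUESTED':
          return { ...state, saving: true };
        case 'SAVE_SUCCEEDED':
          return { ...state, saving: false };
        default:
          return state;
      }
    }

    const store = createStore(editorReducer);

    // Components dispatch actions instead of holding the state themselves.
    store.dispatch({ type: 'CONTENT_CHANGED', content: 'Two hundred words a day.' });
    console.log(store.getState().wordCount); // 5

The parent component then shrinks to dispatching actions and subscribing to the store, which is exactly the decoupling I'm after.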

Do things even if they don't scale, and refactor when it becomes too painful. Next time you attempt a similar venture, you'll have a better feel for what you have to do. All great engineers have their own Frankenstein monsters, so don't be afraid of making broken things: always prioritize learning experiences, and remember there is probably always a better way to do things, so shut down your inner perfectionist and start hacking.

Getting into Programming

I started programming as an autodidact at 13, trying to build my own role-playing game forum. Sheer curiosity fueled the passion and got me hooked on the craft.

I ended up graduating from college with a major in software. Engineering school taught me the basics, how each concept fits together. More importantly, how software quality is defined, evaluated, and consistently produced - which is what separates a hobbyist from a professional.

Both formal and informal education have pros and cons. Formal education is no longer a necessity to work in most companies. Its most essential aspects can be replaced by personal practice, books, online courses, or support communities. This post is an attempt at teaching you how to go about learning how to program.

Learning how to program is half learning how to code, half learning how to optimize your code for humans and machines to process. Once you understand that, it becomes clear that learning how to program is first and foremost a quest for quality: programming is a search for beauty.

Learning programming is thus similar to an artisan's apprenticeship. You need to integrate many tools and concepts in order to reach a high level of mastery. Each technology you are bound to use in professional settings serves this quest for technical mastery. For example, Git versioning addresses the challenges that come with the collaborative environments all developers are bound to take part in. The mantra of the software developer is continuous improvement.

When you start learning karate, you probably want to kick someone's butt during your first training session. Of course, that's not how it works: you need the basics first, otherwise you just end up hurting yourself. More importantly, you need the underlying philosophy - to understand it's not okay to use your powers against the very rules your practice is based on: pacifism in karate (undoing and avoiding violence), or excelsior in programming. Now that we understand what programming is about, we can proceed to learn the basics.

We all seek knowledge because we are expecting something from it. The way you learn must reflect the inner reason why you got started in the first place. Learning is thus a constant shift between a macroscopic (the end goal, the bigger picture) and a microscopic (an atomic element of knowledge) scale. Depending on where you stand, you need to take a pragmatic (tutorial) or a theoretical (compendium) approach, or something in between (handbook). The quicker you strike a balance between the two, the faster you can start developing a "passion" for the subject - because you created an action/reward loop.

One thing to understand about programming is that the language rarely matters. PHP, JavaScript, Java, Python, Ruby... spending too much time wondering which choice is better is futile. The reason is quite simple: most "mainstream" languages follow the same paradigm mixing imperative and object-oriented programming. Once you understand a concept in one language it becomes easy to adapt it to another. It's true for basic elements of programming - loop structures, conditional statements, etc. - but also for more advanced and subtle concepts, such as design patterns explaining how good software is structured: if you know how to use one MVC framework (Symfony), you can quickly grasp the inner workings of similar tools (Laravel, Ruby on Rails...).

... to be continued

Green Web

I've been thinking a lot about the possibilities offered by the new web technologies (JAMStack, service workers, etc.) that appeared over the last five years, and I see a huge opportunity in applying those technologies to surf the Green IT movement.

More specifically, I can picture a new wave of digital agencies or products focusing on greener web technologies.

Developing more sustainable web applications is not only an ethical matter; it's also a huge economic opportunity to create faster and richer websites that scale while costing much less time and money.

WordPress powers 35% of the web, and the Internet produces as much CO2 as the global aviation industry, about 800+ million tons of CO2 per year. Since web technologies remain crucial in the fight against climate change (remote work, instant communication, education, etc.), I don't think their usage is going to decrease any time soon.

Switching to green IT could literally eliminate millions of tons of CO2. One million tons of CO2 is about one billion pounds of coal burned, so even though this is not much, it's still significant.

Now, I'm not the first to see an opportunity (Green IT is not a new thing), but I'm incredibly disappointed by how little innovation I can find from a single Google search.

I only found a single interesting resource so far: "What is Sustainable Web Design?", a manifesto written by Tim Frick, author of O'Reilly's Designing for Sustainability and CEO of the green web agency Mightybytes. Nothing else. But even then, I can only dwell on the fact there is a lot of wasted potential: no mention of static-generated websites, no progressive web apps, and no research regarding low-tech alternatives.

Not sure where the rabbit hole will take me but I'll let you know.

Hackers and Pianists

Back in high school, I had a crush on this incredibly smart girl whose parents emigrated from the Maghreb. We were the two top students in our classroom, but her grades were slightly ahead of mine. Up until 9th grade, I used to be at the top of my class. Finding someone who could beat me at this game was intriguing; I wanted to know more about her. Naïvely, I tried to engage with her by asking for book recommendations, and a few months later I confessed my admiration for her. I received a big fat No and moved on with my life, but I got to read two great books: Jane Austen's Pride and Prejudice, and Body and Soul by Frank Conroy. Body and Soul is an apprenticeship novel featuring Claude Rawlings, a piano prodigy. We follow the character throughout childhood and his harsh training to become a pianist under the wings of several maestros, up until adulthood.

One detail that particularly struck me is how he trains his body to increase his technical proficiency. Professional pianists follow an "off-bench" workout routine to develop their upper body, balance, and hand-eye coordination. It's part of the art.

I was on a date the other day with a Saigonese girl I met on Tinder, and she asked me if I played the piano. My fingers are long and thin, so she assumed I might. I just answered that I'm a laptopianist. A few moments later, a thought started emerging: could I become a better programmer if I trained like a pianist?

Playing the piano is a lot like programming. You need hand-eye coordination, muscle memory, swift hands trained to withstand long hours of work, and good typing skills. If you do not take care of your body, you cannot perform at full capacity.

I wonder if I can come up with a training regimen to increase my productivity, targeting specifically the aforementioned skills. It's quite usual for professionals to increase their typing speed (WPM/CPM-based typing tests), but what about the other areas? Let's see.

Hard Coding

I spent the whole day working on a new feature and failed to deliver. It's one of those days where I bump into too many bugs to count. I'm frustrated. I have nothing to show for my efforts.

It happens. Sometimes, the problem is too hard to solve in a single day.

The key is to have a good night of sleep and let the solution come to you.

When you focus really hard on a single problem for several hours, your brain becomes as rigid as a stone. It becomes difficult to think. I feel grumpy. I'm full of doubts and I'm questioning myself. It's like swimming in mud, and I don't enjoy it.

I might feel bad, but deep down I know serendipity will knock at my door soon enough. It's a matter of hours. While I sleep. While I try to fall asleep. Maybe tomorrow, or perhaps the day after. Because in the end, if I give it enough time, there is no technical problem I cannot break.

The hardest problems are also the most satisfying to break. It's hard to describe the feeling of accomplishment. It starts with a sigh of relief and ends with a celebration dance. The joy is hard to contain. The energy has to be released. You can refill your cup and have a snack, you earned it. If it's late, you earned the right to bring the prey back home and enjoy the fruits of the hunt with your loved ones.

We live for those moments. We live to solve problems and do hard things. It's the closest thing we have to transcendence.

Helping fight climate change with my work

I still have much progress to make, but I've been improving my carbon footprint a lot in the last two years: I use less transportation (no car, no commute, and cutting my air travel by at least 50% in 2020), live from a backpack, cut down on meat and alcohol, and wear warmer clothes to reduce my energy consumption. The next step would be to move toward a zero-waste lifestyle.

I've also started decarbonizing my online activities by switching to a green web hosting provider two years ago, decluttering my online presence, and adapting my tech stack to be more environmentally friendly.

I learned a lot in the process, so I decided to launch a website focusing on the green tech niche called Permatelier.com, once I finish Bouquin's MVP.

I want to set it up as a minimalistic blog detailing technical solutions to reduce the Internet's carbon footprint, starting with web development. I would talk about things like low-tech web servers, static websites, JAMStack development, and progressive web apps.

As I already wrote, switching to green IT would eliminate millions of tons of CO2. All it takes is for web agencies and developers to understand the economic benefits such a change would bring, and I intend to prove that with facts and concrete business case studies by adding green web development services to my tech content marketing business Writelier.

March is going to be a power month. I'm moving to a youth hostel in a 12-bed dorm and bought a 24/7 coworking space membership to close all escape routes. It's not going to be easy, but I think I'll need that to truly enjoy the following months biking in France and Scandinavia.

how i started as an indie maker

read levelsio

1000 fans => blew my mind

How I Teach Web Development - Part 1

People tend to mystify technology. The teacher's first objective is to lower herself to the student's level of understanding, so as to turn magic into reality.

What's hard to chew should be progressively broken down and made simple to digest. This is why I always go for a top-down approach, starting with the reasons why we do things before going deeper into the intricacies of the craft.

If learning is about empowering, I'll always make sure to design my course so as to help students do more at each of our meetings. No time should be wasted on details that do not bring any practical value: I'll let them fill in the gaps through self-study.

Owing to this point, the first lesson I give is always about showing how easy it is to build a website, independently of your background. I like to begin by explaining what the Internet is and how the web works from an end-user's perspective. It's possible to build a minimal website in half an hour while describing the core concepts behind the World Wide Web.

After introducing each concept, I go over the cutting-edge tools that will actually help get things done. For example, I'm currently teaching HTML/CSS. Once we get past the basic mechanisms, we will cover TailwindCSS and Git, before diving into JavaScript. Strong foundations are essential: when learning web development, it's easy to get lost in the jungle of new technologies. Without being too dogmatic, it's important to give students tools to perform at a professional level right away - even if it's something as basic as a CSS framework, it helps cut down the work costs of any engineering team.

Humanized Tech

I see a lot of former engineering students moving away from more technical positions to work as managers or consultants. The same thing happens in the software industry.

The pay is nice. You don't have to work outside in the rain and the cold. There is free coffee in the open space, and even snacks sometimes. The environment is always changing, but there is no lack of professional opportunities.

It can be a thankless job though. A complex feature requires skills similar to those of a painter, but you don't get to put your name on it. I don't think I'm being overly dramatic if I say there is a lack of recognition for what software developers do.

We do it because we love the thrill of solving problems, not because we want fame. And yet, I can't help but feel like developers tend to be dehumanized.

Developers carve stones to erect cathedrals, but we only remember the patrons and architects of the world. Software engineers have yet to shift from stone-carving to sculpting, from laborers to artisans, or even artists.

I wish for the new wave of makers, indie hackers, and no-code creators to bring a new life to the industry, a sort of tech humanism where developers and users come together at the center.

We don't need more bullshit jobs. We shouldn't be afraid of staying in more technical positions and getting our hands dirty. We need more people to become proactive, to start making things together and get recognition for it, independently of who we are, where we are, or where we come from. Go out there!

I never want to work for a big company

your 20's = years where you should go full gargantua and learn as much as possible

Imagining an Ethical Web

What would a manifesto for building ethical websites look like? I've thought about seven items so far:

  1. Empowerment over Enslavement: Remove the use of dark patterns to hook the user into doing less and becoming the product.
    Examples: remove misleading newsletter registration settings, inform about addictive behaviors
  2. Community-first: Support the development of strong sustainable bonds between the users rather than ephemeral relationships. Use competition sporadically to inspire and create excitement.
    Examples: add the possibility for users to interact with each other and outgrow the limits of the platform they are in, remove implicit power statuses
  3. Substance over Virality: Time is limited. Tech companies should be more mindful of that and actually deliver more signal than noise to their users.
    Example: change propagation algorithms to actually reflect the interests of the user rather than what is viral (looking at you, Facebook and YouTube)
  4. Privacy-focused: Ensure a strong emphasis on ethical privacy and data policies, in the best interests of your users.
    Examples: don't sell data to third parties, don't extract data you don't need to serve your users
  5. Business transparency: Clearly and publicly state your business metrics and objectives. Develop systems for public accountability.
    Example: Open Startup Movement
  6. Offline-first: Reduce your website's energy consumption by implementing offline-first principles (a minimal sketch follows this list).
    Example: Progressive Web Apps
  7. Green hosting: Use web servers and cloud solutions powered by renewable energy.
    Examples: A2Hosting, GreenGeeks
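
To make the offline-first item concrete, here is a minimal sketch of a service worker using the standard Cache and Fetch APIs; the cache name and asset list are placeholders:

    // sw.js - a minimal offline-first service worker (placeholder cache name and URLs).
    const CACHE = 'static-v1';
    const ASSETS = ['/', '/index.html', '/styles.css', '/app.js'];

    // Pre-cache the core assets at install time.
    self.addEventListener('install', (event) => {
      event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
    });

    // Serve from the cache first, and only fall back to the network on a miss.
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((cached) => cached || fetch(event.request))
      );
    });

Registering it from the page is a one-liner, navigator.serviceWorker.register('/sw.js'), and every request answered from the local cache is a request that never travels the network.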

N.B: A philosophical question to answer later: Does every app deserve to be built?

Introduction to the Five Principles of Sustainability

Dr Michael Ben-Eli from The Sustainability Laboratory defines sustainability as a "dynamic equilibrium in the process of interaction between a population and the carrying capacity of an environment, such that the population develops to express its full potential without adversely and irreversibly affecting the carrying capacity of the environment upon which it depends." To achieve this balance, he introduces the five principles of sustainability.

The interaction between humans and their environments can be broken down into five domains: the Material Domain, the Economic Domain, the Domain of Life, the Social Domain, and the Spiritual Domain. Each principle is associated with exactly one domain.

The First Principle proposes to "contain entropy and ensure that the flow of resources, through and within the economy, is as nearly non-declining as is permitted by physical laws" to reach sustainability in the Material Domain. The Material Domain constitutes the "basis for regulating the flow of materials and energy that underlie existence".

The Economic Principle is as follows: "Adopt an appropriate accounting system to guide the economy, fully aligned with the planet’s ecological processes and reflecting true, comprehensive biospheric pricing." The Economic Domain provides a "guiding framework for defining, creating and managing wealth".

The Third Principle, attached to the Domain of Life, is about "ensuring that the essential diversity of all forms of life in the biosphere is maintained". The Domain of Life provides the "basis for appropriate behavior in the biosphere with respect to other forms of life".

The Fourth Principle applies to the Social Domain: "Maximize degrees of freedom and potential self-realization of all humans without any individual or group adversely affecting others." The Social Domain provides the basis for social interactions.

The guiding principle of sustainability in the Spiritual Domain states that we have to "recognize the seamless, dynamic continuum of mystery, wisdom, love, energy, and matter, that links the outer reaches of the cosmos with our solar system, our planet and its biosphere, including all humans, with our internal metabolic systems and their externalized technology extensions. To embody this recognition in a universal ethics for guiding human actions". The Spiritual Domain identifies the necessary attitudinal orientation and provides the basis for a universal code of ethics.

Limiting beliefs in the tech economy

You don't need to know how to code to launch a tech product. You don't need a network to make an app. You don't need experience to start a business. You don't need much money to create your own venture.

Limiting beliefs don't discriminate.

Colleagues, school, friends, lovers, family... they come from anywhere.

But we are in an age of digital wild west. For every problem, there is an opportunity. A cyberspace where the brave will prevail. What you need is a thirst for truth, that which is in accordance with reality. The people who thrive best in today's economy are the ones doing hard things at the fastest rate, with the ability to both learn fast and execute well.

Teachers are plentiful, both proteiform and substantial. Experiences are a currency. Online communities are the new universities. Personal branding is the new resume.

Digitalization has pros and cons, yet it stays a tool whose sole purpose is to amplify mankind's deepest needs and desires. Nietzsche predicted "the slow emergence of an essentially supra-national and nomadic type of man, who, physiologically speaking, possesses as his characteristic mark a maximum of the art and power of adaptation". Digitalization makes reality negotiable. Once you internalize that, you can start breaking your limiting beliefs.

Master Programmer

As a beginner developer, focusing on becoming a "10x engineer" is the wrong approach to get better at the craft of programming. You don't want to produce ten times what other engineers do, you have to make your own path.

An amazing programmer is a master programmer, a developer who can do great work. That's our ideal: we may or may not reach it, the choice is ours to make every day. In software, you do great work by knowing how to learn how to code anything, with others. It's about being capable of contributing to masterpieces.

You don't need to know everything by heart, but you can't afford not to know how to obtain the information you need.

As long as you know how to divide and conquer, you can do anything. Being able to break down a problem into smaller ones is the essence of programming. It implies a general understanding of the field and a deep comprehension of the overall problem at hand. Knowing what you need to know is 50% of the work.

Programming is social. You collaborate with tons of people: other developers, end-users, copywriters, marketers... the list of stakeholders goes on and on. You want to make things for your own intellectual satisfaction, but never forget you mainly do it for others.

Programming is incredibly diverse: from management and business to theoretical computer science, there is no one path you must adhere to. A good hacker follows her interests to benefit others. Just dare and make something personal.

New Green IT

Possibilities for greener web development workflows are now many, but we lack the tools, frameworks, content, and products to make it easier for developers to act upon them. I'm extremely disappointed in the current state of "Green IT", because it fails to take into account modern software architectures like JAMstack, single-page applications, and new browser and database technologies, and sometimes even blatantly ignores the resource costs of software like WordPress (a charcoal factory if I ever saw one). Green hosting and image compression just aren't enough.

Open-Source and Innovation

If it weren't for the open-source movement, most projects would never see the light of day. I would never see myself working on my own products without the help of GitHub and its public repositories, for example.

In fact, the most important components of an app are often open-sourced, and some of the most cutting-edge technologies are only available as raw git repositories with very little literature: open-source is what drives modern innovation.

I could go as far as stating that open-source contribution is akin to charity. Even more valuable, sometimes. Brilliant people give their time and energy to build cathedrals that often go unnoticed. It takes tremendous effort to manage a software project, let alone a whole community, and many open-source projects aren't monetized.

It's pretty rare to have companies build and maintain repositories from scratch, because there is no easy way to make a profit from it and engineers are expensive. It's a long-term investment. For some maintainers, it's a weekend or after-work thing, and it takes a whole other level of courage to do that.

Getting involved in open-source is the first thing I'll do if I ever make it to profitability. It's essential for the advancement of humanity, and I would forever feel ashamed if I was somehow unable to give something back.

Personal Web Design

One glance is enough to recognize a Picasso or a Van Gogh, but what about web designs?

A website is a canvas full of possibilities. And yet, we often see the same boring landing pages everywhere.

It's probably for the better, a standard website architecture makes it easier to navigate and understand. UX design is a domain where experiments and hypotheses have been formulated for decades: we could call it a science.

Web design is at the frontier between art and science, just like cooking or photography. And in a world dominated by technology, the balance shifts toward the latter.

What if it wasn't the case? What if we could recognize a web designer, or any maker, by their designs?

I like how some makers add a signature at the bottom right corner of their apps. Makers are not much different from painters. They are artists, and you can't be an artist without being an artisan. Not being a corporation is a strength, we should be proud of our humble craftsmanship.

How would one go about creating his own web design "style", then?

I think we could mimic how artists do it: by picking ingredients from different sources, cooking them together, and adding new flavors, rather than following a recipe. Instead of downloading a single CSS library, why not make your own by borrowing UI elements from your favorite designs? Everybody has different tastes and goals, so why should I use one UI kit over another?

The same goes for utility-first approaches: come on, do I really need all those class names and all these dependencies? Minimalism is the answer to most design problems, so why do I feel such a heavy weight on my back?

The remaining question is: what is my style? And how can I make it clean and usable? I guess I'll never know if I don't make something first.

Problem Statement

I do not wish to raise kids in a world where nothing grows. I do not want to spend time on trivial work, living paycheck to paycheck, that only adds to the problem. I do not see myself having a future in a land torn by floods, tornadoes, droughts, and social tensions fueled by an ever-growing population packed in increasingly smaller containers. I want to wake up every morning with an inexhaustible will to live and create, knowing I'm doing something positive.

Modern web experiences for businesses and individuals, without compromising on performance and the planet.

The climate emergency is the number one problem of the century. The Information Technology industry consumes 10% of the world's electricity while emitting as much CO2 as the aviation industry. In that light, performance is not a nice-to-have: it is the business owner's duty to ensure resources are put to their best use.

With circular web development practices, we can decrease the consumption of most websites and web applications tenfold, a hundredfold, or more. With the right programming languages and software architectures, I found we can divide the carbon footprint of most websites and web applications by 100 to address this long-term problem.

Problem: few organizations and individual developers address this problem while taking into account modern ways of designing and writing code.

My new mission as an indie web developer is to shrink the carbon footprint of the Web, by making and fostering greener, faster, and fairer web technologies.

Product Country

We can think of a tech product as a sum of smaller products with a common core product. GitHub is a graphical interface for Git, embellished by a social network of developers, a static web hosting service, tools for Continuous Integration...

A tech product is not atomic, it's a molecule of features. A country with a capital city, secondary cities, towns, and inhabitants.

Building a successful product is probably a lot like building a successful country, except you don't dwell on petty politics.

You want people to love your country so that they can settle down and help you grow it. That's user acquisition and user retention.

More importantly, you must build your cities so that they can complement each other. It's especially important for indie makers to understand. When Pieter Levels built Nomad List, he augmented it with Hoodmaps and Remote OK - three atomic products reinforcing each other in the Nomad niche/country. Marc Köhlbrugge did the same with his Maker country: WIP, Startup Jobs, and BetaList.

Focus on one core product, build satellite products for increased synergy. Never forget the people are sovereign: they are the ones who make or break a product. As a founder, lead the way to sustainability by eating your own dog food.

Programming is Writing

Programming is not merely an engineering discipline. It is about writing for machines. It is about writing for humans too.

A software engineer should put the human first, then the machine.

Human first, meaning with the stakeholders in mind. It doesn't matter how well engineered a program is if it is unreadable, or worse, worthless.

A good program is like a good blog: modular, atomic, and simple. As in any good writing, the key is simplicity. If you need fancy words to sound impressive, your writing is probably lacking. Similarly, using cryptic lines of code won't impress anyone. Readability is way more important. Great code is self-explanatory.
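
As a deliberately small illustration (the data fields are hypothetical), compare a "clever" one-liner with the same check written for the next human reader:

    // Cryptic: correct, but the intent and the magic number are hidden.
    const f = (users) => users.filter((u) => u.sub && Date.now() - u.seen < 2592e6).length;

    // Self-explanatory: the same check, named and spelled out.
    const THIRTY_DAYS_IN_MS = 30 * 24 * 60 * 60 * 1000;

    const countActiveSubscribers = (users) =>
      users.filter((user) => user.sub && Date.now() - user.seen < THIRTY_DAYS_IN_MS).length;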

When I read great software documentation, I feel more eager to learn and use the given technology. Software without documentation is bound to be maintained by one or by none.

Programs are like pieces of art as well: they are fundamentally useless. It doesn't mean they have to be worthless. There are few worthless apps. A programmer has a duty to solve a problem, to make the users feel emotions. Positive ones preferably.

To get good at writing, you need constant practice and to iterate on what you produce. Programming is comparable.

Finally, what is a programming language? It is a static bag of words, a vocabulary of its own. We can view programming as a writing genre.

Programming: Past and Future

I stumbled upon an old GitHub repository of mine yesterday. A machine learning research project written in C that classifies time series. As I was going through the dusty code, I was reminded of my beginnings as a programmer, learning C in middle school.

C is still taught in college since it powers the majority of the tech world, but it's not considered as cool as it used to be. Modern developers learn Python, JavaScript, or Ruby, because that's what modern companies code in.

The development time is much lower, because you don't have to do low-level things like defining variable types or allocating memory. And since developers are more expensive than less efficient code, we prefer sacrificing runtime performance for readability, development speed, and maintainability. It's understandable: software development is already hard as it is, so why bother introducing more bugs and more complexity?

But if we look at it in terms of performance, low-level, statically typed languages have a few non-negligible advantages. Compared to C, Python is a resource hog. When you look at it in terms of accessibility, energy consumption, and cost savings, low-level languages hold up much better.

C is probably not going to make a comeback as a mainstream enterprise programming language, but I'm very excited about the new generation of low-level web technologies like Go, WebAssembly, and Web Components. Frameworks like React or Vue are great, but what if we didn't have to spend time and energy downloading and executing heavy JavaScript code to run rich websites? We could greatly expand the possibilities offered by the Internet.
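
As a taste of what "using the platform" feels like, here is a minimal custom element built with the native Web Components API, no framework and no build step; the element name is made up:

    // A framework-free <word-count> element using the Custom Elements API.
    class WordCount extends HTMLElement {
      connectedCallback() {
        // Count the words passed through the "text" attribute when the element is attached.
        const text = this.getAttribute('text') || '';
        const words = text.split(/\s+/).filter(Boolean).length;
        this.textContent = `${words} words`;
      }
    }

    customElements.define('word-count', WordCount);

    // Usage in plain HTML: <word-count text="no framework was harmed here"></word-count>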

For now, the technology isn't ready to disrupt the last decades of web development, but there are hybrid solutions. Technologies like Google's Accelerated Mobile Pages, Svelte 3, or this list of open-source Wasm projects are good examples. Exciting times to be a developer!

Project-driven Web Development Apprenticeship

If you want to learn to code, you have to understand programming is a craft. What you know doesn't matter as much as what you did: a potter is not judged by his knowledge of the clay, but by the work he displays on his shop's shelves. An efficient software apprenticeship is project-driven.

The focus of programming is not code, it's the stakeholders' needs: your colleagues, your users, and you. Learning to program is learning to identify and satisfy those needs through well-written code, just like writing is not merely about putting down words on a piece of paper.

In other words, programming is about understanding and solving problems; it's deeply entrepreneurial. If you just got started, you probably don't have any users to test your skills against, so start with you: be your own user.

I find it incredible that most computer science majors don't have a personal website upon graduating. Your personal website should be your home on the Internet, the place where you invite guests to have a great time and get to know you. A LinkedIn profile is nice, but it doesn't exactly allow you to let your personality shine through code.

A personal website is a great project to learn how to code. It is useful to create career opportunities and experiment with new programming concepts, yet simple enough to develop a minimum viable version in a day or two.

You begin by learning HTML, CSS, and the fundamentals of Git and web hosting. You can understand a lot about the web in just ten lines of code without having to buy anything. More importantly, you can have a live website to show for your efforts in less than an hour.

Once you get the fundamentals of how the web works, you can use your first programming language to spice things up and get acquainted with the main concepts. JavaScript is incredible for that: nothing to install, it works right away in your browser. Play around with small projects, like todo apps or simple games, and upload everything to both your GitHub account and your personal website.
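
As an example of the kind of zero-install project I have in mind, here is a minimal to-do list in plain JavaScript; the element ids are placeholders for whatever markup you write:

    // A tiny to-do list (assumes an <input id="task">, a <button id="add">,
    // and a <ul id="list"> somewhere in the page).
    const input = document.querySelector('#task');
    const button = document.querySelector('#add');
    const list = document.querySelector('#list');

    button.addEventListener('click', () => {
      const label = input.value.trim();
      if (!label) return; // ignore empty entries
      const item = document.createElement('li');
      item.textContent = label;
      // Clicking an item toggles its "done" style.
      item.addEventListener('click', () => item.classList.toggle('done'));
      list.appendChild(item);
      input.value = '';
    });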

From there it only gets tougher. The quality of your code is probably terrible, or too simple to be terrible. You have to lift heavier weights. Take on a front-end framework, like React or Vue, and re-implement your website using a static file generator, like GatsbyJS or Gridsome. Reduce the number of lines it takes you to add a feature. Simplify, then simplify some more.

At this point you should be able to add a blog to your personal website and display dynamic content from flat files. It's time to learn to connect external services to your application. It implies learning some back-end skills, such as API design and Model-View-Controller development. The things you learned up until now should help you take on a back-end stack like NodeJS and build your first simple server-side services. You'll also learn how to persist data in a database to build software other people can use.
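
A first server-side service can be surprisingly small. Here is a sketch using ExpressJS; the route and data are placeholders, and a real app would swap the in-memory array for a database:

    // A tiny JSON API with ExpressJS; the in-memory array stands in for a database.
    const express = require('express');
    const app = express();
    app.use(express.json());

    const posts = [{ id: 1, title: 'Hello, web' }];

    // Read all posts.
    app.get('/api/posts', (req, res) => res.json(posts));

    // Create a post from the request body.
    app.post('/api/posts', (req, res) => {
      const post = { id: posts.length + 1, title: req.body.title };
      posts.push(post);
      res.status(201).json(post);
    });

    app.listen(3000, () => console.log('API listening on http://localhost:3000'));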

This is very similar to what I suggested to my first intern, back when I co-founded my first startup. He only had one month to learn to contribute to our code base, and he still managed to pull off a useful internal tool to generate legal documents.

Start small, be practical, confront your work with the real world, and build upon it. That's how you learn how to program.

Slow Internet

Most of my boyhood was spent in rural France. I never had the joy of experiencing high-speed Internet until I entered college. 24 hours to download a video game? Sure did. This is how I learned the meaning of the word patience, and why I'm qualified to understand how it feels to deal with slow web interfaces.

What's a slow Internet connection anyway? It's not that bits are slower to move from one location to another; it's that the carrying capacity of the network is different. To give you an analogy, it's like delivering food from A to B with two Ferraris: both are equally fast, but on a slow network the Ferrari can only carry what you can hold in your arms, whereas on a fiber network it tows a car carrier trailer. Consequently, you will deliver a large quantity of food much faster in the latter case.

It's the same principle to take into account when building services for the slow Internet: the amount of time you spend loading the car and the size of the order are the only metrics you have to work on.

In other words, what you want is a car that's ready to go as soon as an order comes up, and whose load is as small as possible.

This is why static-generated websites, which are built and pre-rendered ahead of time, before users even ask for them, will always be much faster than websites dynamically put together at runtime.

This is also why you have to be extra careful about the weight of your webpages: adding an extra picture can look great, but it's not mandatory if it prevents useful information from being displayed fast.
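
One common mitigation, sketched here with the IntersectionObserver API, is to lazy-load images that sit below the fold; the data-src convention is just one usual pattern, not a requirement:

    // Lazy-load images: the real URL lives in data-src and is only fetched
    // when the image is about to enter the viewport.
    const lazyImages = document.querySelectorAll('img[data-src]');

    const observer = new IntersectionObserver((entries, obs) => {
      entries.forEach((entry) => {
        if (!entry.isIntersecting) return;
        const img = entry.target;
        img.src = img.dataset.src; // trigger the actual download
        obs.unobserve(img);        // each image only needs to load once
      });
    }, { rootMargin: '200px' });   // start loading a little before it becomes visible

    lazyImages.forEach((img) => observer.observe(img));

Modern browsers also understand the native loading="lazy" attribute on images, which achieves much the same thing without any JavaScript at all.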

Software, my Love

I started programming at 13 to build my own role-playing game forum engine in PHP. I failed, but the magic of software stayed with me and I wanted to become a software engineer. I am fortunate I didn't discover programming through the prism of formal education. I had an issue to solve, coding was the solution: it wasn't forced on me, it came to me.

A program is a bag of instructions transforming an input into a worthwhile output. Software allows us to represent knowledge in a different way, sometimes more efficiently, to access it or distribute it. The beauty of software resides in its educational purpose. Education comes from the Latin "ex ducere", meaning "to lead out", toward what's outside. Software is an opportunity for growth, an opportunity to transcend, the ability to help others access a higher level of comfort - freedom from pain.

And this is why I find programming so beautiful. A vision of pure creation. A language coming to life to bend reality.

Humans have a tendency to disenchant the world. If I had to convince someone to study software engineering, I wouldn't define it as pure engineering. Instead, I would emphasize its creative yet practical aspect. Being a great software engineer is being an innovative problem solver.

Something is gonna go wrong

I released new code yesterday and, as usual, stuff broke. Something is always going to go wrong: broken features, change-averse users, unhandled edge cases... danger is everywhere.

A launch is not just one moment in time, it's a cycle. And the more you go through the cycle, the harder it gets to overcome Launch Resistance - the fear of breaking stuff that used to work.

Move fast and break things. Developers hear it a lot, and yet, there is always a lingering fear that things won't work out the way you want them to. They rarely do.

I broke a core feature of the website: the writing editor. You can whine and hide, or you can fix it. I chose the latter and released a fix in one day of work. Will it matter in six months? No, so why should I care now? I learned more in one day of breaking and fixing things than in one day of preparing things.

Breaking things is taboo in the engineering world. In school, we are told we need to handle all the use cases, to prepare for every possibility, that we need to be perfect. There is no such thing as perfection in this world, and if there were, chasing it would kill your soul.

As a maker, you can't afford to lose time. Never forget you work WITH users, not just FOR them. Throw stuff at them and they will tell you what they like and what they don't. That's how you fix and improve things at the same time.

What about unit testing then? It's important indeed, but never forget you can't prove the absence of bugs, so don't spend too much time testing things programmatically. Confront yourself with reality.

Thanks to all my users for the support, and don't hesitate to ping me if I can help you with anything.

Sustainable Web Dev

After much deliberation, I'm doubling down on the sustainable web development niche. It's something I only started writing about 10 months ago, but the idea has been with me ever since I became a digital nomad more than two years ago.

Technology and its usage have always fascinated me, even more so since I graduated from engineering school. I started with simple PHP/JQuery apps, then learned about Symfony, React, NodeJS, Gatsby, Next, MongoDB, and progressive web apps.

To me, the path is clear: the web development industry is bound to shift toward the most efficient tech stacks, which cost less to develop with and maintain while driving better user experiences and thus higher revenues. No-code will become prominent, and for everything else developers will use more advanced architectures blurring the line between front-end, back-end, and platforms.

And that's where sustainable web architectures will come into play, with things like universal web applications, offline-first, real-time collaboration, and green ICT.

In order to take on this challenge, I've completely reinvented my tech stack: CouchDB/PouchDB for replicated databases, ExpressJS and Caddy as programmable web servers, SvelteJS to build lightning-fast web interfaces, and NodeJS to act as a controller. I'm leveraging service workers, WebAssembly, and static page generation heavily, without losing control, flexibility, or development speed.
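To give you an idea of the replicated-database part of this stack, here's roughly how a local PouchDB database syncs with a remote CouchDB instance; the database name and the server URL are placeholders:

// Two-way, live replication between a local PouchDB database (in the browser)
// and a remote CouchDB server. The database name and URL are placeholders.
import PouchDB from 'pouchdb';

const local = new PouchDB('notes');
const remote = new PouchDB('https://couch.example.com/notes');

// Writes land locally first, so the app keeps working offline...
local.put({ _id: new Date().toISOString(), text: 'Hello, offline world' })
  .catch((err) => console.error('local write failed', err));

// ...and continuous sync reconciles both sides whenever the network is up.
local.sync(remote, { live: true, retry: true })
  .on('change', (info) => console.log('replicated', info.direction))
  .on('error', (err) => console.error('sync error', err));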

My new writing app, Writing Startup, will act as a proof of concept. It's planned for release in January 2021, and from there I'll start providing new services as a sustainable web developer. The domain name sustainablewebdev.com has already been bought and is ready to use. But not today; today it's time to document my research results!

Teaching People How to Program

I got into programming as an autodidact, and I ended up graduating from college with a major in software. Both formal and informal education have pros and cons.

Autodidacticism fueled my drive; it's what got me hooked on the craft.

Engineering school taught me the basics, how each concept fits together. More importantly, how software quality is defined and evaluated - which is what makes you an initiate, a professional.

Formal education is no longer a necessity to work in most companies. Its most essential aspects can be replaced by personal practice, books, online courses, or support communities.

Learning how to program is half learning how to code, half learning how to optimize your code for humans and machines to process it. Once you understand that, it's clear learning how to program is a quest for quality, an artisan apprenticeship. Consequently, programming is a search for the highest quality: beauty.

You need to master many tools and concepts in order to reach this level of mastery. Each technology you learn serves this purpose. For example, you don't do versioning because everyone does it, you do versioning because development is teamwork and versioning addresses the challenges that come with a collaborative environment. The mantra of the software developer is continuous improvement.

When you start learning karate, you probably expect to kick some people's butts during your first training session. Of course, that's not how it works: you need the basics first, otherwise you just end up hurting yourself. More importantly, you need the underlying philosophy - to understand it's not okay to use your powers against the very rules your practice is based on: pacifism in karate (undoing and avoiding violence), or excelsior in programming. Now that we understand what programming is about, we can proceed to learn the basics.

We all seek knowledge because we are expecting something from it. The way you learn must reflect the inner reason why you got started in the first place. Learning is thus a constant shift between a macroscopic (the end goal, the bigger picture) and a microscopic (an atomic element of knowledge) scale. Depending on where you stand, you need to take a pragmatic (tutorial) or a theoretical (compendium) approach, or something in between (handbook). The quicker you strike a balance between the two, the faster you can start developing a "passion" for the subject - because you created an action/reward loop.

One thing to understand about programming is that the language rarely matters. PHP, JavaScript, Java, Python, Ruby... spending too much time wondering which choice is better is futile. The reason is quite simple: most "mainstream" languages follow the same paradigm mixing imperative and object-oriented programming. Once you understand a concept in one language, it becomes easy to adapt it to another. It's true for basic elements of programming - loop structures, conditional statements, etc. - but also for more advanced and subtle concepts, such as design patterns explaining how good software is structured: if you know how to use one MVC framework (Symfony), you can quickly grasp the inner workings of similar tools (Laravel, Ruby on Rails...).

... to be continued

Teaching Sustainable Web Development

I'm thinking about creating a business around sustainable web development.

I spent the last few months working on designing better web apps using concepts such as static-generated websites, progressive web apps, and green web development. Now that everyone is locked inside with time to kill, and with the unemployment rate reaching a peak, it's never been more profitable to learn how to code.

The path taken by organizations and individuals to approach web development is archaic. I believe there are more eco-friendly, performant, and accessible ways for everyone to go about it.

This is why I decided I'm going to dedicate more time to create a written course on sustainable web development.

The reason why I'm prioritizing it now is a long story that I'll briefly summarize. I'm stuck in Budapest with three roommates for another month. None of them have remote work skills they can use to make money online, but they have plenty of time to learn. The best I can do right now is to teach them how to build web applications, and to use this experience to improve Cowriters' new software architecture, exercise my writing skills, and create another income source.

It's a win-win opportunity: I need a break from making digital products (the last few months working on Bouquin were stressful, to say the least), and I want to spend more time offline during the pandemic.

If you're interested in this project, I'll be posting updates regularly, so stay tuned, or don't hesitate to send me a message to discuss the topics you'd like me to cover.

The 5 Principles of Sustainability - Introduction

Sustainability, or re-designing the way we live to be more mindful of our environment, is the greatest challenge in the History of humankind. What is sustainability though? To find a solution, we need a common framework for every stakeholder to understand the different aspects of the problem. This is the main motivation behind the Five Core Principles of Sustainability, written by Dr. Michael Ben-Eli.

Sustainability can be divided into five sub-domains: the material domain (how do we manage our natural resources?), the economic domain (how do we manage wealth?), the domain of life (how should we interact with the biosphere to respect all living beings?), the social domain (how do we nurture healthy social interactions?), and the spiritual domain (how do we define a universal code of ethics?).

Still according to Dr. Ben-Eli, failing to address one aspect will result in failure: each domain is interdependent. We need an alignment that is robust to change, a "dynamic equilibrium": "the process of interaction between a population and the carrying capacity of its environment such that the population develops to express its full potential without producing irreversible, adverse effects on the carrying capacity of the environment upon which it depends." The five aforementioned domains of sustainability represent those interactions.

The rest of the document describes each principle in detail and the implied operational activities. Even if each principle is only one page long, there is a lot to discuss and implement. My objective over the following weeks is to address each page and come up with concrete habits to integrate into my daily life.

The Age of Makership

The age of makership is barely getting started.

Craftsmanship gave birth to a whole new set of businesses with their own distinct codes and sub-cultures. Similarly, makership is a new paradigm.

The means of production are getting increasingly distributed: 3D printing, no-code tools, free online services, mass artificial intelligence... it's never been easier to create wealth.

Today, anyone can make a tech product.

Degrees won't matter as much as having a portfolio when it comes to getting a job. Employees will be expected to have built something on their own - a community, a tool, a website, or more generally an online business.

It's an age where you can create your own unique path in life, independently of your status or place of birth. More importantly, we are free to work with anyone on anything we care about.

Yesterday I received an email from Revolut, my online bank, telling me about their new pricing plan: it's now possible to own a business account for $0. In other words, anyone can run a monetized side-project for less than a dollar a month. You can generate a free website without knowing how to code, host it for free on Netlify or GitHub Pages, and monetize it with ads and sponsors once you release enough content to get decent traffic.

Anyone can become an entrepreneur nowadays.

The Art of Offline Coding

When I was a student, we used to write software on pieces of paper.

While this sounds terrible, it does matter.

Downloading manuals, RTFM, Google searches

Increased focus, flexibility (not Wi-Fi dependent)

The Future of Web Development

I've spent the last two weeks experimenting with new technologies to improve the user experience on my digital products.

I started with a static file generator called GatsbyJS. I implemented a text editor with Markdown capabilities with a local auto-save feature.

Then I built a toy app using a micro-service architecture: Gatsby on the front-end, a Symfony API on the back-end, an Auth0 integration with JSON Web Tokens, and cross-domain requests.
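To give you a rough idea, the cross-domain, token-authenticated request pattern looks more or less like this on the front-end; the API URL is a placeholder and the token is assumed to come from the Auth0 login flow:

// Cross-domain request from the front-end to a separate API, authenticated
// with a JSON Web Token. The URL and the token source are placeholders.
async function fetchProfile(accessToken) {
  const response = await fetch('https://api.example.com/me', {
    method: 'GET',
    headers: {
      // The back-end validates this token (issued by Auth0 in my setup).
      Authorization: `Bearer ${accessToken}`,
    },
  });
  if (!response.ok) {
    throw new Error(`API request failed: ${response.status}`);
  }
  return response.json();
}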

I'm now building a progressive web app using local storage. It works offline, it's fast, and it doesn't cost anything to host. What's more, it can be installed as an app on mobile AND on the desktop while still being accessible on the web, and the data can be fetched from any uniform resource identifier.

I'm utterly fascinated by the concept. As a teenager, the Internet appeared to me as the land of opportunities. I could build something out of thin air and make it accessible to everyone online in a few clicks. I didn't have a phone, and desktop software looked hard to build, so I settled for web development. The divide between mobile, desktop, and web seemed impossible to bridge.

Fast forward to 2019, and progressive web apps allow developers to build cross-platform software that works everywhere. It's the convergence every developer has been dreaming of.

It's still a relatively new technology. Chrome has made it possible to install progressive web apps since October 2018. Firefox started supporting PWA installs on mobile last month, but they are still not supported on desktop. However, both browsers support offline navigation, so it's still possible to make your website available even when the network is down.

Even though the support is still limited, migrating any website to a PWA makes it way more performant thanks to the combination of caching and local storage.
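To show what the caching part looks like, here's a bare-bones service worker sketch that pre-caches a handful of files and serves them when the network is down; the file list is illustrative:

// sw.js - a bare-bones service worker: pre-cache the app shell at install time,
// then answer requests from the cache first and fall back to the network.
// The ASSETS list is illustrative.
const CACHE = 'app-shell-v1';
const ASSETS = ['/', '/index.html', '/styles.css', '/app.js'];

self.addEventListener('install', (event) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});

// In the page: register the worker once, if the browser supports it.
// if ('serviceWorker' in navigator) navigator.serviceWorker.register('/sw.js');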

I'm looking forward to releasing my first progressive web app and I don't plan to go back to the way I used to design digital products any time soon. The future is almost there.

The Maker Economy: Learn to Code

The No-Code Movement has its advantages: you can build minimum viable products and proofs of concept in minutes for free, and no-code development platforms are incredibly easy to use. Making a personal website or manipulating databases have become common tasks where programming creativity is not inherently fundamental. Reinventing the wheel is overkill in those use cases, and not everyone is interested in learning how to code. Generally speaking, no-code tools are for people willing to outsource all coding activities.

If you're a tech maker, however, learning how to program will give you tremendous entrepreneurial powers. Notice I wrote "learning how to program", and not "learning how to code". Anyone can code, but programming takes a lot of tacit knowledge.

Each piece of code is a brick: you can stack code together to obtain a wall, a program. There are many ways to build a wall. Some will resist the strong winds, some will collapse at the first breeze. Programming is the craft of building sustainable programs, for machines and humans alike.

And this is precisely the huge difference between home-made quality software and no-code tools: the former is built with both sustainability and customizability in mind, its essence is organic.

Of course, the learning curve to master a no-code tool is way smoother, because you sacrifice several benefits of making things yourself.

Coding stuff yourself is cheap: you can start running a startup with a few bucks. All you need is a domain name, some elbow grease, sometimes a web server, and you are set. You trade time for money and knowledge. I started monetizing 200 Words a Day along with three other websites for $20 a month. Bubble for professional use starts at $62 per month. Wix for businesses starts at $18 per month, but you also have to get yourself a bank account, pay for a domain name, and you are limited to one website.

Programming means becoming independent. The more you rely on external companies to help you, the more restricted you are: a tightly-coupled solution cannot work by itself, by definition. If your solution provider experiences an outage or decides to raise its prices, you are at its mercy. Using development tools is a creative trade-off: you can choose the design template you like, but you are still limited. The appeal of creative freedom is what got me into learning some code in the first place: I was a 13-year-old who loved joining role-play phpBB forums, and I wanted to build my own. I started with phpBB forum generators, quickly felt limited by the built-in parameters, and decided to learn how to make a website myself.

Learning enables social aggregation. All successful tech entrepreneurs belong to one or several tribes because humans are inherently social: indie hackers, YC alumni, makers... we all need labels to thrive. At the same time, sharing and contributing are inherent to programming: what you learn is content you can distribute. Knowledge commands respect and recognition, so learning how to program is also a way to develop your network. From an entrepreneurial point of view, developers are extremely interesting because they are educated early adopters, not afraid - sometimes excited - to try out new technologies and give valuable feedback.

I've always been excited about science fiction and all the possibilities offered by software. Artificial intelligence, web development, cybernetics... the field is just so versatile. Boredom seems impossible. Learning how to program is not just any skill - you are not simply learning how to paint a wall - it's a set of highly marketable skills you can get paid for. All industries are affected by digitalization, and programming gives you access to these new opportunities. If you love dancing, you can build your own platform to showcase your salsa skills, to create your own brand, or to become a virtual teacher. You can stay at home to take care of your kids or sick parents, or travel the world as a digital nomad, while still interacting with customers worldwide. The variations are infinite, depending on your own interests. Learning how to program becomes more than a skill: it's liberating, it becomes an integral part of your lifestyle.

Economies of scale appear when you become a more experienced programmer. Everything you code should be modular, meaning capable of working on its own but easy to integrate. If you respect this principle, all code is reusable. If you program a blog engine once, you don't need to code its features again: you can reinject each feature into new projects. Code quality is built by iteration, just like a craftsman learns to forge better swords by learning from his shortcomings and improving on the previous ones, except that with code you can reuse the base materials forever. The resulting boilerplates skyrocket your productivity, and that's how you become a prolific maker: by constantly recycling and improving. By contrast, reusability is quite limited when it comes to no-code tools.
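A toy illustration of what I mean by modular: a tiny self-contained module (the slugify example is made up) that works on its own and can be reinjected into any new project:

// slugify.js - a small, self-contained module: no project-specific dependencies,
// so it can be dropped into the next blog engine, shop, or landing page as-is.
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-') // collapse anything non-alphanumeric into dashes
    .replace(/^-+|-+$/g, '');    // trim leading/trailing dashes
}

module.exports = { slugify };

// Reuse in any project:
// const { slugify } = require('./slugify');
// slugify('Hello, World!'); // -> 'hello-world'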

For makers, building a hundred different web applications is quicker with code, because your execution skills compound when you start programming things yourself. And we all know that long-term speed of execution is what really matters in the entrepreneurial game.

The Maker Way

We do everything for a reason, and quite often this reason is someone else. Creating tech products is no different: making is inherently social. That's precisely what makes the Maker culture so appealing to me: makers not only acknowledge that we build for others, but also that we build with others.

Companies are just people who were led to build things together - goods, products, experiences, emotions. Why are we making a distinction between support, strategic, and operational positions then? It feels counterproductive. If we want to design better products and better services, employees must become makers.

We all would benefit from developing an entrepreneurial mindset, to grow both as individuals and members of different organizations.

The term entrepreneur has a bad connotation. A title only a chosen few can possess. The word maker feels more down to earth: who didn't make something at some point in life?

There is pleasure in doing something yourself. It's an act of compassion you have total control over, that you can improve at. A gift of self-worth. When everything crumbles around you, you still have your mind to create.

Making is probably what defines us as humans. It's universal, there is no civilization without arts and crafts. Embracing our inner maker is a step toward a more fulfilling life.

The Shokunin Model

When people tell me I think too much like a "coder craftsman" and not enough like a real entrepreneur, well, I can't help but feel proud. I certainly don't see myself as either an entrepreneur or a freelancer. Life is too short for labels. My goal isn't to make money while I sleep or to work for others, but I do both. My only concern is to bring out the best in me through my craft to benefit society. The best term to encapsulate how I see my job would be shokunin:

"The Japanese apprentice is taught that shokunin means not ¬†only having technical skills, but also implies an attitude and social consciousness. [‚Ķ] The shokunin has a social obligation to work his/her ¬†best for the general welfare of the people. This obligation is both ¬†spiritual and material, in that no matter what it is, the shokunin‚Äôs ¬†responsibility is to fulfill the requirement.‚ÄĚ ‚ÄĒ Tasio Odate, woodworking shokunin

My goal isn't to grow a business to sell it like you feed a pig for slaughter. It happens that the best way to express myself fully is to make digital products.

Entrepreneurship is a tool to better serve society. A medium, rather than a goal or a status.

In my opinion, focusing on the craft is ultimately what makes me happy and the world a better place.

It's important for the craft to bear fruits, but this is not where the artisan's attention is. If you push your craft to its limits, it eventually allows you to make a living.

I can't help but hate the way we picture freelancers, entrepreneurs, makers - how quick we are to label them and establish dichotomies. What we should do and what we shouldn't.

Fuck that. Life is too big to fit in a box, and too fast to catch with your own hands.

There is no distinction to make between coding, marketing, selling, writing, or anything else. It's all part of one thing: your craft.

(NB: this is a draft, will get back to it to clear it up)

Thinking Software

  1. write user stories

  2. break down into unit tasks

  3. if stuck, start hacking (drafting)

  4. lay down the outline to estimate how long it takes

  5. an atomic user story shouldn't take more than a week to implement, ship fast and iterate

Toward an art of software development

What is art? Technical mastery, innovation, expression, beauty. Those are the words that come to mind. Art is about emotions, not rationality. About how it makes you feel, rather than how well it works.

In The Pragmatic Programmer, Andy Hunt and Dave Thomas say: "the construction of software should be an engineering discipline", but that "it doesn’t preclude individual craftsmanship". It is assumed by many that software development is purely an engineering discipline.

I do not agree.

"Art" comes from the Ancient Greek word "Technńõ", which implies the technical mastery of a craft. It is quite incredible to understand that artists were in fact, at first, artisans. The border between craftsmanship and art is thin: an artist was someone able to perform a given work at a higher quality level than others. Excellence was the criteria. Artisanal products (such as textiles) could be perceived as much more precious than paintings or sculptures. In a sense, any artisan can become an artist.

On the other hand, the meaning of a word is tied to its historical context, so defining art from its etymology might be self-limiting. Let's have a look at another approach.

In his book Living with Art, Mark Getlein proposes six functions of contemporary artists:

  1. Create places for some human purpose

  2. Create extraordinary versions of ordinary objects

  3. Record and commemorate

  4. Give tangible form to the unknown

  5. Give tangible form to feelings

  6. Refresh our vision and help see the world in new ways

I could make parallels with software for each point above. My key takeaway is that art is about communicating ideas and storing information as well. Roman artists made sculptures of emperors to "store" this information and allow them to access immortality in some way. Art is something you interact with. Not only physically.

Similarly, software aesthetics exists. Paul Graham says it better than anyone:

Hackers, likewise, can learn to program by looking at good programs [...] paintings are created by gradual refinement [...] Great software, likewise, requires a fanatical devotion to beauty. If you look inside good software, you find that parts no one is ever supposed to see are beautiful too. [...] It drives me crazy to see code that's badly indented, or that uses ugly variable names. [...] Most makers make things for a human audience. And to engage an audience you have to understand what they need. Nearly all the greatest paintings are paintings of people, for example, because people are what people are interested in.

I can see many parallels between the Maker movement and how artisans reached the artist status in the first place.

Craftsmen belong to guilds. Artists have patrons.

Engineers belong to companies. Makers strive to live from their own products.

Software development has reached a stage where it is not necessary to be part of a company to make tech products.

Hackers break pre-established rules and express themselves through their own medium. Just like great artists use art as catharsis, makers create products to solve their own problems.

Companies become either increasingly atomic or increasingly big. There is less and less in-between.

"Artists ship," says Steve Jobs. So do makers.

So while I admit that hacking doesn't seem as cool as painting now, we should remember that painting itself didn't seem as cool in its glory days as it does now.

Paul Graham

In Ancient Greece, each art form was personified by a Muse. Nine Muses, but none to represent painting and sculpture.

Sculptors and painters were held in low regard, somewhere between freemen and slaves, their work regarded as mere manual labor.

In Our Time: The Artist BBC Radio 4, TX 28 March 2002

Sound familiar?

What if, instead of marketing software development as a pure engineering practice, we promoted it as an art?

Could it inspire a whole new generation of product-oriented programmers willing to solve important problems?

Developers, start creating. Don't fall into elitism, but be proud of your job. Because maybe one day, the future generations will be able to look at us and see pioneers of a software art.

Training Routine for Programmers - Part 2

In part 1 I discussed why I need to develop a training routine to become a better programmer. Part 2 is a draft containing notes describing solutions to the identified problems. Part 3 will be about defining micro-habits to perform on a daily basis.

I. General Hygiene

1. Equipment

An Ergonomic Workstation helps prevent health issues:

A) Increase your exposure to natural light, decrease night time work to avoid relying on artificial light sources.
B) Sit on a comfortable chair with lumbar support and made of an airy fabric (no leather or hard surface, which tends to heat you up).
C) Ergonomic keyboard and mouse reachable without stretching, no more than 20 degrees between your forearms and your tools.
D) Ventilate your room.
E) Screen positioned 50 cm from your eyes. Center of the monitor 20 degrees below eye level.

2. Breaks

A) Stand up and stretch every half-hour.
B) Walk outside every three hours.
C) Have lunch outside.
D) Use a mindfulness bell.
E) The 20/20/20 rule: after 20 minutes of computer work, look at an object about 20 feet away for about 20 seconds

3. Evening routine

A) Remove screens two hours before bed.
B) Use an e-reader to read and take notes using a pen and a notebook.

4. Proper diet

A) Remove caffeine intake
B) Fruits and nuts over junk snacks
C) Plant-based diet

5. Proper sleep hygiene

A) Early afternoon nap if needed
B) Go to bed when your body says so

6. Social routine

A) Go out to meet new people
B) Call loved ones

II. Conditioning

1. Free weight exercise

A) One hour after waking up.
B) Program: StrongLifts 5x5
C) Stretching
D) Diaphragmatic breathing

2. Typing exercises

A) Practice typing on keybr.com

3. Eye exercises

A) Focus change, near and far focus, figure eight [4]
B) Palming, blinking, zooming, shifting [5]
C) The long swing, looking into the distance, exploring the periphery, sunning and skying [6]
D) Peripheral vision training (sticks and straw exercise) [7]

4. Hand-Eye Coordination

A) Switching focus, play catch, juggle [8]

5. Memory

A) Learn the keyboard shortcuts of the Atom editor by heart, review every day
B) Learn the keyboard shortcuts of the Kubuntu desktop by heart, review every day
C) Read, take notes, convert them to mind-maps and memorize them
D) Practice a foreign language every day and memorize 5 words/expressions per day
E) Unplug the mouse, use only your keyboard (you only need a mouse when doing graphic design)

6. Focus

A) 10 minutes of seated meditation per day

Bibliography

  1. The sacrifices we make to our health as programmers, Yoni Weisbrod, Hackernoon
  2. 10 Major Health Concerns For IT Professionals, Crisp360 Editors, Business Insider
  3. How to be a Healthy Programmer, Blazej Kosmowski, Selleo
  4. Eyes Exercises, Corinne O'Keefe Osborn, healthline
  5. Eye Exercises to Improve Eyesight, HDFCHealth
  6. 4 Powerful Eye Exercises for Rapidly Improving your Vision, Meir Schneider, Conscious Lifestyle Magazine
  7. Exercise Your Eyes to Increase Peripheral Vision for Athletics, Dr. Larry Lampert, Stack
  8. 3 Great Exercises To Improve Hand-Eye Coordination, Chiraine Rosina, We are Basket

Training Routine for Programmers

Heavily influenced by the way pianists exercise, I'm doing some research on how to become a better programmer by developing a training regimen.

Exercising is a crucial part of a good work/life balance. Programming time is mainly spent hammering a keyboard: it's a sedentary life with little physical movement.

Poor physical health is synonymous with heart disease, thrombosis, and cancer. More specifically, programming is associated with carpal tunnel syndrome (bad wrist posture), vitamin D deficiency (lack of sun exposure), bacterial infections (unkempt keyboards), stress (software development is stressful: crisis management, deadline pressure, constant computer usage), insomnia (blue light exposure), lower back pain (bad posture), and neck/eye strain (a badly adjusted chair and monitor).

Up until now, I never really thought about my health as a software engineer. I work out from time to time because I like the hormonal rush. What if I could align my workout routine with my aspirations as a maker? I'm pretty sure this would finally be a great reason for me to stay consistent with my visits to the gym.

I'm going to write some notes on how I plan to help prevent the aforementioned health issues and improve my programming skills. From there I'll establish a series of micro-habits I'll follow over the next few months. If you'd like to tag along, maybe we can try to experiment with this regimen together.

Web Symbolism

spider = illusion, creativity, aggression, education, evil (trap), craftsmanship, feminine power, transmutation, ancient knowledge, creation and death, words, communication, control, magic over people and things

8 legs = luck, infinity, wealth, 8 bits, byte

web = universe, fate, connection, complex yet strong crafts

Website Carbon Footprint

According to this website carbon calculator, the average website produces about 1.76 grams of CO2 per page view.

Generating a million page views is the same as burning 500 kg of charcoal.

According to the same calculator, my personal website consumes 0.15g of CO2 per page view: roughly ten times less than the average website, and cleaner than 88% of the websites tested. And I have yet to move it off Netlify to run on renewable energy.

My website also scores 98/100 on Google PageSpeed (although the new version I'm developing has a perfect score on Google Lighthouse).

If websites were cars, mine would be a Tesla Model S: greener and faster than most.

There is no big secret formula to obtain the same result: the more work you give to the web server, the bigger the footprint and the longer the loading time. If the web server does nothing but listen for incoming requests, the energy consumption is minimal.
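To make this concrete, here's what a web server that does nothing but listen and send back pre-built files looks like with ExpressJS; the public folder name is an assumption:

// A web server that does almost no work per request: it only maps URLs to
// pre-built files in the public/ folder (an assumed name) and streams them back.
const express = require('express');

const app = express();
app.use(express.static('public')); // serve pre-rendered HTML, CSS, and JS as-is

app.listen(8080, () => console.log('Static site served on http://localhost:8080'));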

When the web was still young, servers were mostly used to send out static HTML files. Then came the need to interact with forms and databases, leading to the birth of programming languages like PHP. This complexity came at a cost.

With the rise of serverless architectures and the app boom, this need can be delegated to third-party tools. We can bring back our focus on the core of what makes a website: content.

What I Love About Programming

When you sit down to code, you usually already know what the result should do. This is the main difference between art and programming: a painter figures things out one stroke at a time. Programming still lets you figure out how to deliver the solution and what shape it should take: it's a highly creative job, but it's also a practical one where you are supposed to solve a problem. This pragmatic creative freedom is what I love about programming.

I don't remember the last time I wasn't excited to sit down at my desk to do some coding. I can pick the projects and the features I want to work on, and I can write whenever I want. Nothing is forced upon me, and I think that's the reason why I find it so enjoyable. Even if the project is not mine, I can choose to contribute if it's meaningful to me.

Call me a workaholic if you want, but I also feel a deep sense of fulfillment whenever I'm in my office. The feeling of being in the zone and getting tasks done makes me high. Making innovative features, refactoring and debugging code with nothing but a colorful text editor is gratifying: it's like building Rome or conquering Mount Everest from the comfort of an armchair. The task I like even more is automating my workflow. It's like buying a tool chest with futuristic screwdrivers inside, except it's all free and tailor-made for and by yourself.

Programming is also social. I just can't wait to release new things because I can see how it impacts my users' lives, usually for the better. When someone uses your app, even if it's not groundbreaking, it makes your work all the more meaningful. It creates a purpose that goes beyond you, it's transcending. Getting better becomes a raison d'être because you don't want to fail the people who put their trust in you and the ideals you put in your work.

I just don't see myself ever stopping. I can't spend a week without coding something, it's part of my identity. If I don't, I feel bad, as if I lost a part of me. I just have to make sure it never feels like a job, by being mindful of what I do and by having fun while doing it.

What is programming all about?

I started programming as an autodidact at 13, trying to build my own role-playing game forum. Sheer curiosity fueled the passion and got me hooked on the craft.

I ended up graduating from college with a major in software. Engineering school taught me the basics, how each concept fits together. More importantly, how software quality is defined, evaluated, and consistently produced - which is what separates a hobbyist from a professional.

Both formal and informal education have pros and cons. Formal education is no longer a necessity to work in most companies. Its most essential aspects can be replaced by personal practice, books, online courses, or support communities. This post is an attempt at teaching you how to go about learning how to program.

Learning how to program is half learning how to code, half learning how to optimize your code for humans and machines to process it. Once you understand that, it becomes clear learning how to program is first and foremost a quest for quality: programming is a search for beauty.

Learning programming is thus similar to an artisan's apprenticeship. You need to integrate many tools and concepts in order to reach a high level of mastery. Each technology you are bound to use in professional settings serves this quest for technical mastery. For example, Git versioning addresses the challenges that come with the collaborative environments all developers are bound to take part in. The mantra of the software developer is continuous improvement.

When you start learning karate, you probably want to kick someone's butt during your first training session. Of course, that's not how it works: you need the basics first, otherwise you just end up hurting yourself. More importantly, you need the underlying philosophy - to understand it's not okay to use your powers against the very rules your practice is based on: pacifism in karate (undoing and avoiding violence), or excelsior in programming. Now that we understand what programming is about, we can proceed to learn the basics.

We all seek knowledge because we are expecting something from it. The way you learn must reflect the inner reason why you got started in the first place. Learning is thus a constant shift between a macroscopic (the end goal, the bigger picture) and a microscopic (an atomic element of knowledge) scale. Depending on where you stand, you need to take a pragmatic (tutorial) or a theoretical (compendium) approach, or something in between (handbook). The quicker you strike a balance between the two, the faster you can start developing a "passion" for the subject - because you created an action/reward loop.

One thing to understand about programming is that the language rarely matters. PHP, JavaScript, Java, Python, Ruby... spending too much time wondering which choice is better is futile. The reason is quite simple: most "mainstream" languages follow the same paradigm mixing imperative and object-oriented programming. Once you understand a concept in one language, it becomes easy to adapt it to another. It's true for basic elements of programming - loop structures, conditional statements, etc. - but also for more advanced and subtle concepts, such as design patterns explaining how good software is structured: if you know how to use one MVC framework (Symfony), you can quickly grasp the inner workings of similar tools (Laravel, Ruby on Rails...).
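To make the "basic elements" concrete, here's the same trivial logic - a loop and a conditional - written in JavaScript; it translates almost word for word into PHP, Python, or Ruby:

// A loop and a conditional: the building blocks that carry over almost
// one-to-one between PHP, JavaScript, Java, Python, or Ruby.
const scores = [12, 7, 19, 3];

for (const score of scores) {
  if (score >= 10) {
    console.log(`${score}: pass`);
  } else {
    console.log(`${score}: try again`);
  }
}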

... to be continued

What's a Maker?

Note: The title of my book, scheduled for release on March 15th, is Making a Maker. I guess it's only fair to make a first attempt at defining the term Maker in this post. I will iterate on it later. Consider it a draft.

Makers are people who make things. If we consider this general definition, makers have been around since the dawn of time, and most individuals are makers. But words evolve. They are meant to, because societies change.

In the context of the Maker Movement, a maker is a member of a culture.

Cultures are exclusive by definition. It doesn't mean that cultures divide people according to gender, race, sexual or ethnic criteria. It means that cultures are based on a set of exclusive values, beliefs, goals, and principles. Those values do not pertain to everyone, but to a specific social group.

A Maker is thus an individual who adheres to the values of the Maker culture.

Does it mean we should follow a Maker Manifesto defining a fixed set of values? I believe it would be self-limiting. This is the reason why it's so hard to define the term Maker. Mark Hatch says it well in his own Maker Manifesto:

In the spirit of making, I strongly suggest that you take this manifesto, make changes to it, and make it your own. That is the point of making.

Mark Hatch - CEO of TechShop and Author of "The Maker Movement Manifesto: Rules for Innovation in the New World of Crafters, Hackers, and Tinkerers"

It is thus more relevant to identify the core values of the Maker culture rather than a set of rules to live by.

I will make an attempt at identifying those values in a future post.

What's A Webpage?

In the last post I wrote about web development, I came to the conclusion that a website is a directory located on a web server containing web documents.

However, one category of web document is mandatory to make a website: the webpage.

A webpage is a document written in a language called HTML (HyperText Markup Language) that can be read by a web browser and transformed into a human-friendly interface. A minimal webpage looks like this:

<!DOCTYPE html>
<html>
  <head>
    <title>My first webpage</title>
  </head>
  <body>
    <p>hello world</p>
  </body>
</html>

If we were to copy/paste this into a text editor, save it as a .html document, and put it on a web server, we would have built a website titled "My first webpage" that displays the text "hello world". It's that simple.

As its name suggests, HTML is a markup language (like XML or Markdown, among others), meaning it uses tags to indicate the relationships between each element inside the document.

Each tag has its own meaning.

An HTML document starts with <!DOCTYPE html> to indicate to automated readers that it's an HTML document.

We then find the <html> tag that will wrap the whole document.

The <head> tag contains metadata (data about the document itself - in this case, the title).

The content of the document is placed in the <body> tag. In our example, we have a paragraph tag <p> displaying the text "hello world".

Tags are the building blocks we use to create rich webpages. In the next article of this series, we will see more tags, how to use them, and how they concretely translate into a web interface.

What's A Website?

The web is a network of machines.

Anyone can access it by using a web browser like Firefox, Chrome, or Safari.

When we want to access a document on the web, we use its web address, known as a URL (Uniform Resource Locator). Each URL corresponds to a specific document: a picture, a video, a PDF file, or a webpage, among others.

A URL like https://cowriters.app/search?query=internet is divided into four parts: a communication protocol called HTTPS (Hypertext Transfer Protocol Secure), a domain name (cowriters.app), a path (/search), and some parameters (query=internet).
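As a quick sketch, the standard URL class (available in browsers and in NodeJS) performs the same decomposition on the example address above:

// Decomposing the example address with the standard URL class.
const url = new URL('https://cowriters.app/search?query=internet');

console.log(url.protocol);                  // 'https:'        - communication protocol
console.log(url.hostname);                  // 'cowriters.app' - domain name
console.log(url.pathname);                  // '/search'       - path
console.log(url.searchParams.get('query')); // 'internet'      - parameter value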

When you type a URL in your browser's search bar, you are requesting a web document from a machine called a web server.

HTTPS is the protocol used to communicate with this web server.

The domain name allows us to find the network where the web server is located, by using another type of machine called a Domain Name Server (DNS). Those domain name servers translate domain names into Internet Protocol (IP) addresses, which identify any machine on the web.

Once we reach the web server, it uses the path and parameters in our HTTP request to find the corresponding web document, which is then sent back to us. Web browsers can read those documents and display them in a form we humans can read too.

A website is simply a directory located on a web server containing web documents.

Where I'm heading to, technology-wise

I talk a lot about my projects and my day-to-day life as a nomad entrepreneur, but not so much about my skills as a software engineer. I've decided to change that by releasing more technical content.

My current stack is Symfony (PHP) on the back-end and JQuery/Twig/Bootstrap on the front-end.

I got into Symfony because that's the framework I had to work with at my first job, and since I have a more back-end background I never got into front-end frameworks and just tweaked my apps with some JQuery.

Symfony is an MVC framework, so I don't really feel like changing it. PHP is a popular language that keeps getting better, and Symfony is similar to Laravel in terms of architecture and performance. Twig is also an incredible template engine.

I'm slowly taking on React to replace JQuery. It's 2019 and I'm starting to build DOM-intensive projects. React appeared as the most marketable skill I could take on as of today.

I don't want to waste the five years I spent studying calculus in engineering, so I'm also taking on new courses in applied Machine Learning, more specifically in Natural Language Processing. I studied basic Machine Learning during my last year in college and I absolutely love the topic! It's amazing to see all the things you can do with it, and it's giving me so many ideas to help writers at 200WaD: we could create our own minimalistic Hemingway App or Grammarly, the possibilities are unlimited and it's beyond exciting!

In parallel, I'm also taking on new tools to increase my code quality. For example, I'm studying ways to improve my Git workflow and to set up Continuous Integration/Continuous Delivery services.

This is basically where I'm heading over the next year.

Why I don't buy software tools

My business costs consist of servers and domains. Except for those, I never buy out-of-the-box tools for software development. As a programmer and indie maker, I have several reasons for that.

Disclaimer: if you don't know how to code or if you're managing or working for a company, this post is not entirely applicable.

First, it's never a good thing to outsource parts of your business you know nothing about. Outsourcing comes after you hit a maximum pain threshold preventing you from focusing on your core activity on a frequent basis. Before that, you don't know what the underlying problem is, and you don't know what to delegate to a third party.

Pain and learning curves are similar, but they are not the same thing. Learning is never easy and can lead to suffering, but it's a meaningful pain.

When you're compulsively buying a tool, you're not learning. But your capacity to learn hard things fast is what ultimately makes the difference between you and the competition. Without this knowledge, you are trapped by your own limitations.

Knowledge is leverage. Nobody will praise you because you bought a Mailchimp, but developing an intimate understanding of what makes Mailchimp so useful and how it works can land you jobs later on. You can't acquire this experience without getting your hands dirty.

Money is another argument. I don't think it's a smart move to spend on software before developing some sort of product/market fit. Increasing your financial constraints leads to creative outbreaks and lets you see new opportunities. Reinventing the wheel to give it your own spin is how I get most of my product ideas. Thanks to its low maintenance costs, The Co-Writers became profitable two months after its first release. It's still not enough to make a living from, but at least it's enough to stay online indefinitely while I improve it. If you double your costs, you also need to double your customer base to make ends meet.

At a strategic level, it's in your best interest to remove dependencies on third parties. Partnering always means taking a risk. Long-term sustainable business relationships are hard to develop. Great business partners are even harder to come across. Some people are not here to play the long game or simply don't share the same values. Third-party services or APIs can be discontinued, and tacit service-level agreements can be broken. Even big companies such as Facebook or Google can't be trusted. Small and medium enterprises can be even riskier because of the market's volatility.

Of course, beware of all-in-one solutions. Buying a tool is investing in a company: it's never good advice to invest all your money in one provider, you need to diversify your portfolio to make it more robust to change.

Finally, building your own tool is a fun thing to do, and it's a market differentiator if it's well conveyed to your audience. We live in an age where craftsmanship is making a comeback. People tend to prefer buying from local artisans rather than big faceless corporations. There is beauty in handmade things, which in turn increases their perceived value. When you offer a gift at Christmas, handmade will always come off better, even if its intrinsic value is lower.

Building your own things is not just a matter of pride, monetary gains, and control, it's also about reasoning from first principles to design well-engineered products. That's what makes brands like Basecamp or Ferrari so attractive to customers. Their products are minimalistic and optimized to do one thing well. You can't do that by accumulating layers of complexity.

The only software product I'm still paying for is Makerlog, because what I'm buying is more than just software. It's a founder's vision and a community, not just a commodity or a transaction. I already know how to build a to-do app, but I have nothing to gain from creating my own, even though I use it daily. I have more to lose by not supporting its growth. Most tech products I came across don't meet this requirement.

Why I Reached A One Second Page Speed

The speed of a web application matters more than you might think.

A study published by Deloitte showed that improving your loading time by 100ms can increase your conversion rate and your number of page views by 10%, while decreasing your bounce rate by up to 9%. If your page takes more than a second to load, 30% of your visitors will leave the website after looking at a single page; 90%, if it takes more than 5 seconds.

The faster your website is, the more business you will get. It's that simple.

On a website like Cowriters, it's even more important to have a low First Meaningful Paint - the time it takes for a visitor to be able to start reading - to entice readers to stay on the site and find valuable content.
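For the curious, paint timings can be observed from the page itself with the standard PerformanceObserver API; the entries it reports (first-contentful-paint, largest-contentful-paint) are close cousins of the First Meaningful Paint figure Lighthouse gives me, so treat this as a rough sketch:

// Observe paint timings in the browser with the standard PerformanceObserver API.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is 'first-paint' or 'first-contentful-paint'
    console.log(`${entry.name}: ${Math.round(entry.startTime)} ms`);
  }
}).observe({ type: 'paint', buffered: true });

new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const lcp = entries[entries.length - 1]; // the latest candidate wins
  console.log(`largest-contentful-paint: ${Math.round(lcp.startTime)} ms`);
}).observe({ type: 'largest-contentful-paint', buffered: true });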

A slow website tarnishes the experience for the person reading, but it's equally horrible for indexing robots. As Google officially announced two years ago, loading speed is now part of the ranking algorithm. As an online content creator, neglecting Search Engine Optimization entirely is not much different from committing digital seppuku. I experienced it first hand with Sipreads last month: after improving our page speed between June and July, the number of our pages receiving impressions was multiplied by 7 (from 5 to 37), and our clicks increased by 59% and our impressions by 68%. It probably wasn't the only factor, but I'm sure it helped.

This is the reason why I'm extremely excited to have reached a perfect Lighthouse Performance score today with the new Cowriters homepage: 100/100 with a First Meaningful Paint of 1 second and a Time to Interactive of only 2.3 seconds!

There is still room for improvement before going below the 1-second mark (using, for example, static-generated webpages, service workers, lazy hydration, and a global CDN), but I'm extremely satisfied with the outcome. It means Cowriters has what it takes to become a premium content management system and publishing platform for everyone, no matter where you are in the world and how slow your Internet connection is. I see a huge opportunity for writers and readers alike, and I won't be afraid to jump on it. Less than 3 weeks before you can test it for yourself! :)

Why Reducing our Web Carbon Footprint Matters

There are more than a billion active websites on the Internet today, and each page view emits 1.76 grams of CO2 on average. This is huge, but only a fraction of the whole ICT industry, which emits twice as much as the aviation industry. And the electricity bill keeps growing.

If we have to start somewhere, it's with software.

Have a look at your website dashboard and figure out how many page views you get per month. Then use a website carbon footprint calculator to get an idea of your consumption level. These calculators are not entirely accurate, but they give you a good estimate to start with: multiply your number of page views by your per-view CO2 footprint, and you'll know how impactful a change would be.

In my experience, my web applications and sites land in the 0.1-0.2 grams of CO2 per page view range after switching to sustainable web technologies. Writelier had 12,000 unique visitors over the last 30 days, so at least that many page views (I don't track page views). In other words, dividing my carbon footprint by ten allowed me to prevent at least 18kg of CO2 from entering the atmosphere just last month.
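
For the curious, here is the back-of-the-envelope math behind that claim, using the per-view figures quoted above (0.15 is simply the midpoint of the 0.1-0.2 range):

const pageViews = 12000;          // unique visitors last month, used as a floor for page views
const beforePerView = 1.76;       // grams of CO2 per page view, industry average
const afterPerView = 0.15;        // grams of CO2 per page view after the switch
const savedKg = pageViews * (beforePerView - afterPerView) / 1000;
console.log(savedKg.toFixed(1));  // ~19.3 kg of CO2 avoided, hence "at least 18kg"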

Even though most of the world's page views come from the top 100 websites, it still matters. To give you an idea, 18kg of CO2 is what leaving an LED bulb on for 216 days straight produces, and Writelier is a small website. Apply the change to a billion websites, and we would save enough energy to light the world for decades.

Work and Identity

Karl Marx proposed that capitalism alienates workers when they are robbed of both their tools (means of production) and their creations (the results of their labor).

I can understand why. We put a little bit of ourselves in our work, and every artist understands this feeling of giving birth. Handing over the child of our mental womb, unable to identify what purpose we work so hard for every day, is a crushing sensation.

The opposite is true as well. Over-identifying with our work is equally harmful.

Most people can't stand criticism because they take this feedback too personally. This is probably the biggest obstacle to an individual's growth.

Learning to distance ourselves from our work, just enough to develop the thick skin we need to reject the waste and absorb the nutrients thrown at us, is part of the artisan's spirit.

We can do this by prioritizing the search for continuous self-improvement over an attachment to the outcomes of our work. The artisan takes greater pride in skills than in masterpieces.

After all, few creations last forever: all that remains is how we spend our time on earth. Tying our entire identity to dead objects is absurd.

writing software

book idea

software = solving a problem => marketing => writing

programming language => writing

clarity is key

Principles

150

150, also known as Dunbar's number, is the theoretical cognitive limit to how many people we can maintain stable social relationships with.

Even though the exact number varies from person to person, it tells us that we have to be wary of who we let into our lives.

I think it's especially interesting for solo bootstrapped business owners: if there are only so many people we can deal with on our own, do we need a strategy that takes this biological trait of ours into account?

The startup sphere focuses on growth and economies of scale, but most of those traditional principles praised by all sorts of crowds do not apply to indie makers: not all businesses are meant to reach dizzying heights of valuation or be sold. Some of us just want to earn an honest living while contributing to society, and that's enough.

What if we applied Dunbar's number to business relationships, then? A viable business would be composed of 150 close collaborators and clients, and this would allow the solopreneur to live above ramen profitability.

If you have 150 clients paying $5 per month for your service, you obtain $750 in monthly recurring revenue before taxes, or $1,500 if you price it at $10/month, which is enough for most individuals on Earth to sustain themselves.

It's more realistic to create a close-knit community of 150 high-quality paying customers than to aim for virality.

3 Ways to Reduce Your Website's Carbon Footprint

There are three ways to drastically reduce a website's carbon footprint.

The simplest one is to switch to a green web server. There are a few web hosting providers either offering machines powered by renewable energy or offsetting their carbon emissions through monetary means. Most web services and web hosting providers on the market aren't environmentally friendly, but there are tools to guide you in your technological choices, such as the Green Web Foundation's website check. Surprisingly, most GAFA companies run on green energy.

Another way is to make fewer, smaller requests to web servers. Not only is it the most effective way to consume less energy, but it also greatly increases your website's performance. The less time it takes a server to return a response, and the lighter the load on the network, the better the user experience. With service workers and efficient cache policies in an offline-first approach, the amount of energy needed to use an app decreases considerably. It's especially true for static websites, whose content is by definition less prone to change.

Last but not least, better content and a better navigation experience are needed to decrease noise and allow users to find information much faster. The faster a user finds what he's looking for, the more likely he is to convert. If a visitor needs to go through 10 webpages to find a relevant article, the carbon footprint will naturally increase.

About Open Source

When it comes to making software products, the closed source model is often preferred for economic reasons. Controlling the source code is controlling its usage and distribution, and thus controlling the ensuing business model. It's a way to reduce competition by preventing imitation. Obfuscating your code is also seen as a way to secure your application: in a black-box model, vulnerabilities are hidden deeper.

The open source model has different economic advantages. Open source software doesn't necessarily mean free software. Often, companies developing open source products make a living by providing additional services, such as consulting, managed hosting, or educational content.

Open-sourcing your code allows you to produce more stable software at a lower cost. Developers can collaborate to solve common problems, with little management and on a voluntary basis.

It's also a great marketing opportunity for makers. A product whose release speed is high and whose community remains active is a growing product. The progress made is readily available to everyone thanks to platforms such as GitHub or GitLab. This transparency results in increased trust and quality of service.

Collaborators are often users, so the interests of product owners and end-users are more likely to be aligned.

An open-source license is an opportunity to generate creativity, flexibility, and high engagement: anyone can adapt the code to their own needs. It positively impacts the longevity of the source code by bringing more security: breaches are more easily identified by a whole group of people than by a single developer, and the source code can live on forever in a Git repository maintained by the community. Service interruptions are less likely.

Customer support can also be partially delegated to the developer community: bugs can be discussed and fixes can be submitted through pull requests.

Source code also has an educational value: obtaining the source code means obtaining the solution to the problem it solves, which contributes to a better distribution of knowledge, but also to the creation of new knowledge.

The parent organization still holds the power to decide how the source code can be used and how it'll change to keep the product consistent with the overall vision. If the organization fails to address the needs of its users, they can fork the project, resulting in more innovation.

Open-source is a model of distributed innovation.

Agile Principles

The Agile Manifesto proposes four core values to guide how we craft software:

  • Individuals and interactions over processes and tools: the job must be done as efficiently as possible, so tools and processes should be adapted to the team, instead of the other way around.
  • Working software over comprehensive documentation: working software is the best indicator of a project's progress.
  • Customer collaboration over contract negotiation: a system aims at providing value to customers, so customers should be involved in the making through short feedback loops.
  • Responding to change over following a plan: change is inevitable, so adapting the plan when necessary should be part of our development methodology.

Even though Agile methodologies have become the de facto way to produce software, the way they are implemented varies greatly across organizations.

On a personal note, I use a variation of the Agile Kanban methodology to organize my work.

Combined with the accountability provided by a public Trello board, regular blog posts, and my Makerlog feed, it helps me consistently get things done while acquiring feedback and marketing my projects.

I like this combination because it also gives me an overview of the work I've done and where I'm headed, without being too complicated to apply on a daily basis.

I could still improve my Agile workflow by automating it more using webhooks and APIs. It would be nice to have my Github commits parsed and merged into weekly blog posts, for example.

Bug Down

Programming is a constant roller-coaster oscillating between moments of glory and abysmal bugs.

There are days where you spend hours trying to do something and still fail. It feels terrible, because there is no way to make up for lost time.

But solving bugs is often a matter of persistence: you attack from different angles, reproduce the conflicting conditions leading to the horrendous outcome, and always strive to simplify your code. I have yet to post anything on Stack Overflow, because there is always a solution to work out.

The sour feeling is hard to deal with though. The anxiety, the powerlessness, the inability to solve the damn thing... it's perhaps the most exasperating thing about coding apps.

The only sane way to approach a bug is to step away. We lose as soon as it becomes an obsession, so the only thing to do is to take a deep breath, savor a coffee, and have a walk. Only then can we start a new repository to isolate the bug, search for a solution through trials, experiments, and plenty of Google searches, and fix it. If it takes more than an hour or two, I find it best to leave it for another day and work on something else.

Unit tests are always a good thing to have, but as Edsger Dijkstra said, testing shows the presence, not the absence of bugs. It's only one tool among many.

Content delivery network

Efficient caching and high security, at your customer's front door.

Choosing a Programming Language

Some friends asked me the same question: "What programming language should I learn?" I tell them right away to go with Javascript. I assume two things by saying that: 1) they want to make websites and 2) they want to learn a technology that is in-demand and beginner-friendly. All you need is a browser and you can start learning right away.

That's the short answer. The optimal answer is more complicated.

Using the little programming knowledge you already have to make something is more productive than asking yourself what programming language you should learn. Choosing a programming language is not like choosing a Pokemon starter: you can always change it.

The programming concepts you learn are never wasted when you make the switch, so focus on understanding them instead: design patterns, generic programming syntax, basic algorithms and data structures, versioning, unit testing, documentation... just get better with what you have and make something.

The industry or the company you're aiming at will also impact your linguistic needs.

In theory, you can dive into any field with only one language: you can make video games, build a website, write a desktop app, and even do data science with just C. There is probably a library for whatever you want to do in the language you already know.

In practice, however, each industry has its language of choice: C++ powers the Unreal Engine used in many video games, nearly every website relies on Javascript, and most data science jobs rely on Python or R.

Each language is designed with specific use cases in mind. Some languages are naturally better at a specific task. There are also non-functional requirements to take into account: What's your budget? How hard will the code be to maintain?

Some companies are stuck with legacy code. Do you know which language powers most banking systems? Fricking COBOL. A programming language is not always a logical choice.

The number of job opportunities for a language shouldn't be a criterion. If the job market is bigger, it only means more competition. Learning a more obscure language will allow you to niche down, and if there is more demand than supply, you'll end up in a better position to sell your skills.

There is no easy answer, really. Once you start understanding what programming is about at the core, you can find what you want to do with your skills and perhaps learn a new programming language. Don't give too much importance to your programming language; it has to remain a tool to write great software that helps people. Be flexible enough to quickly adapt to the target market, and don't be dogmatic.

Client-Side Storage

Web data persisted across multiple sessions is traditionally stored on the back-end of the application, but a new wave of client-side storage technologies is about to change the way web apps are built.

Cookies aren't the only way to store data within the browser. There is the Web Storage API to store user preferences, but also IndexedDB for structured data and service workers to cache entire pages.

Combined, these technologies allow better performance (fast loading speed, smaller response payloads) and better resilience (offline support: if the network goes out, the data is still available locally and can sync later). It's an incredible opportunity for web apps to drastically improve their user experience.

Let's take Cowriters, for example. The new app is coming soon and relies heavily on such browser technologies.

Each user's posts are downloaded locally in the background, in a dedicated IndexedDB. It allows users to browse and search their posts almost instantly from their dashboards.

When a post is being modified, service workers are used to save the text editor's state and sync it with the back-end MySQL database. Even if the electricity goes out, the changes aren't lost and will be sent automatically.

Dark mode and menu states are memorized by the application using localStorage variables.
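
As a rough illustration, the dark mode part boils down to a few lines of localStorage (the element selector and key name are made up for the example):

const toggle = document.querySelector('#dark-mode-toggle'); // hypothetical toggle element
toggle.addEventListener('change', () => {
  document.body.classList.toggle('dark', toggle.checked);
  localStorage.setItem('theme', toggle.checked ? 'dark' : 'light');
});
// On the next visit, restore the preference before rendering anything else
document.body.classList.toggle('dark', localStorage.getItem('theme') === 'dark');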

The Cache API can be used to bookmark articles and read them offline without needing an Internet connection.

A full-stack developer, combining the power of both worlds, can make state-of-the-art apps that are fast and energy-efficient. This is the future, and it lies in-between each realm.

Copy Before Code

A website is a communication medium, first and foremost. The message comes before the code.

When you design a website, you need to start from the copy. It's the reason why anyone would visit your website in the first place.

The copy shapes the code, not the other way around.

I would even propose that copywriting comes before branding: if you want to come up with a business, begin with a blog to nail down your main ideas and value proposition.

Push content as if you wanted to build yourself a second brain. Then distill it into a color palette, a logo, a name, a slogan, a mission statement, a manifesto.

Of course, copywriting is an iterative process: the cycle doesn't end with code, copy only keeps getting refined over and over depending on the real-world feedback you receive.

We tend to underestimate the time it takes to write good copy. Some problems and solutions take days, if not years, of deliberate practice to be formulated in a clear and simple way. Copy without expertise is coarse and shallow.

For all these reasons, clear thinking backed by facts is key. Clear thinking then translates into clear writing that will help you get your point across and grow from the resulting interactions.

Distributed Database

CouchDB + IndexedDB GunJS

Distributed databases enable efficient offline-first web applications, cutting loading times while reducing energy consumption server-side.

Energy Consumption of a Programming Language

Today I read a paper benchmarking different programming languages according to the amount of energy they consume, and it blew me away:

  • Javascript consumes 4.5 times more energy than C
  • PHP consumes 5 times more than Javascript
  • Ruby and Python consume more than twice what PHP does

Interpreted languages, which are so common in web development, are often far more energy intensive than their compiled counterparts. Java is the exception with only twice C's consumption.

This is particularly interesting for me since I'm trying to be more mindful about my tech choices with regard to the environment (with green hosting and offline-first web development). If my website can consume five times less energy by switching from PHP to Javascript, it's indeed very important to take that into account. If I can skip interpreted languages and rely on the .htaccess configuration of my Apache instance (written in C) to serve static web pages, it's even better.

I'm kind of surprised no one thought of making this study earlier, 2017 being relatively recent. I stumbled upon it by sheer luck, and no one ever mentions this performance indicator when talking about differences between programming languages.

Worse, I'm trusting web carbon footprint calculators less and less since they don't take this basic factor into account. There is probably an open source project to build around this paper to help the tech industry move toward a greener future.

Four Sub-Systems Most SaaS Apps Need

If you're a frugal indie maker whose time is extremely limited and spread thin by multiple projects, there are four core services you can automate.

Most SaaS applications are built upon those sub-systems. If you manage to build one micro-service for each and host them on a personal subdomain, you save yourself countless hours re-developing them for your next project and tons of money on recurring costs.

  1. Authentication/authorization system: Auth0 is nice, but spend three days learning about OAuth, JSON Web Tokens, and reading your back-end web framework's documentation and you'll be able to roll out your own customizable distributed login/signup workflows across your different projects. Just don't forget to hash passwords, implement a CSRF token system, and always use HTTPS. A minimal sketch of such a token workflow follows this list.
  2. Text editor: A text editor is necessary to create rich content. Refer to my quick overview of what you'll need to build your own.
  3. Mailing service: Emails remain the only way for a user to perform sensitive transactions outside of the app. For example, to change passwords or to confirm an email address. It's also a must-have to inform the users about what's happening: in-app notifications, newsletter, subscription renewal, etc. The problem is that email SaaS bills quickly stack up too if you use services like Mailchimp. As an indie maker with no MRR, you can't afford that. The best way to reduce your email bill is to use AWS Simple Email Service's API. You have to code everything, especially the email editing interface and the tracking features if you need them, but it's simple to get started and the API allows you to do anything. It only costs 10 cents for 1000 emails, you pay for what you actually use, and there is no limit to how many subscribers you can have. Personally, I don't track my users and I prefer plain text, so it's a no-brainer.
  4. Online payment system: Stripe is a game-changer for indie businesses. The API is well-documented and gives a lot of freedom. You still need to code a lot of things though: an interface for your users to visualize their payments and manage them, buttons to connect Stripe accounts to your marketplace, and buttons to receive payments, among others. I use Stripe Checkout and back-end generated session tokens to secure my transactions, but implementing a dedicated micro-service somewhere in my webserver would save a lot of time and reduce code redundancy.
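
For the authentication part, here is a rough sketch of what such a reusable token workflow could look like, assuming Express with the bcryptjs and jsonwebtoken packages (findUserByEmail is a hypothetical data-access helper, not an existing library function):

const express = require('express');
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());
const SECRET = process.env.JWT_SECRET; // never hard-code the signing secret

app.post('/login', async (req, res) => {
  const user = await findUserByEmail(req.body.email); // hypothetical helper
  const valid = user && await bcrypt.compare(req.body.password, user.passwordHash);
  if (!valid) return res.status(401).send({ ok: false });
  // Sign a short-lived token that any of your other services can verify with the same secret
  const token = jwt.sign({ sub: user.id }, SECRET, { expiresIn: '1h' });
  res.send({ ok: true, token });
});

// Reusable middleware to protect any route in any of your projects
function requireAuth(req, res, next) {
  try {
    const token = (req.headers.authorization || '').replace('Bearer ', '');
    req.user = jwt.verify(token, SECRET);
    next();
  } catch (err) {
    res.status(401).send({ ok: false });
  }
}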

There are possibly similar services you might want to centralize. A WebSocket server, for example. But if you manage to automate those four core parts and the way you generate the SCRUD logic of your application, I'm pretty sure you can save 70% of the time you spend developing minimum viable products. The remaining 30% is where the value comes from, so it's in your best interest to free up time for it.

Green Cloud

100% renewable energy, no carbon offsetting gimmick.

How Much Faster Is A Static-Generated Webpage?

One of the main reasons to prefer pre-rendered websites, also known as static-generated websites, over traditional ones is the loading speed.

A faster website provides a better user experience: according to Neil Patel, you lose 40% of your visitors if it takes more than 3 seconds to load the webpage. It's also an important factor to take into account in your SEO strategy, as confirmed by Google.

But how much faster is a static-generated webpage, really? That's what I decided to find out today.

I opened my Cowriters Symfony application, added a test endpoint, and wrote down a few instructions to perform.

I'm not going to compare a static generator like Gatsby or Hugo with a traditional web framework like Symfony, since the two have an entirely different architecture. Instead, I'm going to stick to Symfony throughout the whole experiment.

I first picked a random text in the database and set the number of trials to 10000. Of course, every trial is performed using the same machine.

$slug = 'how-i-got-into-software-development-9775ec3ea0ab0020';
$text = $repository->findOneBySlug($slug);
$trials = 10000;

I then requested the same webpage 10,000 times using a dynamic approach. The web server parses a Twig template, fills in the placeholders with the relevant data, and returns a Response object ready to send back to the web client over HTTP.

$beg_time = microtime(true);
for($i = 0 ; $i < $trials ; $i++){
    $args = array('text' => $text);
    $this->render("pages/article/article.html.twig", $args);
}
$end_time = microtime(true);
$result1 = ($end_time - $beg_time) * 1000 / $trials;

I then did the same thing with the static approach, after a build phase. The build phase consists of pre-rendering the relevant HTML file from the Twig template and storing it on the web server. Serving the resulting file then consists of reading the pre-rendered file and sending the data over HTTP as an HTML response.

//Build phase
$build_time = $generator->build([
    [
        'template' => 'pages/article/article.html.twig',
        'dest' => "/words/{$slug}",
        'args' => ['text' => $text]
    ]
]);
$build_time *= 1000;

//Rendering phase
$beg_time = microtime(true);
for($i = 0 ; $i < $trials ; $i++){
    $this->renderStatic("/words/{$slug}");
}
$end_time = microtime(true);
$result2 = ($end_time - $beg_time) * 1000 / $trials;

The result is undeniable: over 10k iterations, static rendering is more than 5 times faster on average than dynamic rendering (0.05ms for static versus 0.27ms for dynamic, with a one-off build time of 0.37ms).

This is a huge improvement over traditional web development. The website scales better, and the individual user experience is more satisfying. And the more data there is to process, the faster static rendering becomes compared to dynamically rendering the page each time a visitor shows up.

How To Choose A Web Rendering Method

There are four ways to display webpages: pre-rendering, server-side rendering, client-side rendering, and isomorphic rendering. Each method has its pros and cons and should be carefully implemented according to your specific use case.

If search engine optimization is essential for your application, client-side rendering is out of the equation. Client-side rendering is more suited to back-office applications or dashboards that are only accessible to authenticated users, for example.

Choosing between pre-rendering, server-side rendering, and isomorphic rendering comes down to the kind of features you'll be implementing and how fast the application data is going to change.

Pre-rendering lets the web server build static files before any incoming request. It's the most performant way for end-users to access webpages since no action is required at run time: the webserver just locates the corresponding web directory/index file and sends it. This is especially great for slow-changing websites like blogs, ebooks, and personal websites.

When the data changes regularly (several times a day), pre-rendering becomes inefficient and we need the server to build the content at run time. Server-side rendering is the most traditional approach since the invention of the web, but not the most scalable. Optimized server-side rendering leverages caching mechanisms to make up for the loss in performance.

Isomorphic rendering is used when you want the best of all worlds. You can pre-render the content (e.g. with Gatsby) or use a web server to generate dynamic content (e.g. with any modern multi-purpose web framework), and leverage hydration mechanisms to create a stellar UX. It's also the most complicated method to implement since it usually requires code duplication between the front-end and the back-end.

In my experience, complex applications often require switching between different rendering methods. Sticking to one paradigm is easier during development, but it can significantly impact the performance of your app. My advice: use a multi-purpose web framework and create your own boilerplate.

How To Speed Up Your In-App Dashboard 10X With Client Storage

If you built an app, your user probably accesses his data from a central dashboard.

The traditional way to code a dashboard is to fetch the data from a web server: before serving the web page if you use server-side rendering, or by requesting a web service if you perform an asynchronous call from your front-end code. Either way, you have to wait for the data to be processed and to arrive at your application.

If the user consults the app several times a day, you have to go through the same process again and again. It's costly for the web server, and it's costly for you if you have to pay for each API request. A web server can only serve so many requests, so it might prevent your app from scaling and make your users unhappy. You end up paying twice: once for the duplicate requests served to your application, and a second time for the business you lose.

What if you could send the request once, save the data you need for later in the user's computer, and ask again for new data later on? That's what a Client-Side Storage API allows you to do.

There are three different ways to store data at a local level: Session Storage and Local Storage, which are part of the Web Storage API, and the IndexedDB API. The latter is the one we need to store large amounts of indexed data for our dashboard. Web Storage is great for small amounts of data, like settings, but it isn't indexed for fast lookups.

When a user logs in to his dashboard for the first time, the application requests the data from the external web service and stores it in the IndexedDB database. The data will remain available from one session to the next, with or without an Internet connection (Progressive Web Application). This leads to near-instant loading time on repeat visits.
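
Here is a bare-bones sketch of that pattern with the raw IndexedDB API (the database name and the /api/posts endpoint are assumptions):

function openDatabase() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('dashboard', 1);
    request.onupgradeneeded = () => {
      // First run: create the object store, keyed by post id
      request.result.createObjectStore('posts', { keyPath: 'id' });
    };
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function loadPosts() {
  const db = await openDatabase();
  const cached = await new Promise((resolve, reject) => {
    const store = db.transaction('posts', 'readonly').objectStore('posts');
    const query = store.getAll();
    query.onsuccess = () => resolve(query.result);
    query.onerror = () => reject(query.error);
  });
  if (cached.length > 0) return cached; // near-instant on repeat visits

  // First visit or empty cache: hit the web service once and store the result
  const posts = await (await fetch('/api/posts')).json();
  const store = db.transaction('posts', 'readwrite').objectStore('posts');
  posts.forEach((post) => store.put(post));
  return posts;
}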

The user applies changes to the IndexedDB database, and the application takes care of sending the local data once in a while to the external web service. This way, you can fine-tune how much data you need to transport and how often you want to do it.

If the user logs out or changes his access location, the data is synchronized beforehand with the external web server, so the application can redownload it locally somewhere else.

When logging out or after an expiration date, we can also tell our app to erase data from the web browser, leaving no trace for anyone else to read or use. You never want to store sensitive data on the client's side, of course, so you should only share what's strictly necessary and keep the local database clean at all times.

Intro To The MVC Architecture

Writing clean code is as important as having a functional software program. This is why most modern web applications are organized following an MVC architecture.

MVC stands for Model-View-Controller. When you use a modern web framework like Symfony, Laravel, or Ruby on Rails, you implicitly follow an MVC architecture to make your code easier to read.

The Model is the part managing the data, logic, and rules of your application. This is where you write the web services that will interact with the data access layer.

The View presents the data. It can include user interface components or JSON, for example. When you use a front-end framework like React or Vue, you work at the View level.

The Controller is the bridge between the View and the Model, handling all the user interactions with the application. It defines the routes, handles all the incoming requests, and translates them into commands for the Model to generate the View.

Ideally, you want the Controller part to be as small as possible and leave the data processing to the Model: "Fat Model, Skinny Controller".
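
The same idea translates to any stack. Here is a minimal Node/Express sketch (the userModel module and the template name are hypothetical): the controller only routes and delegates.

const express = require('express');
const userModel = require('./models/user'); // Model: data access and business rules

const router = express.Router();

// Controller: translate the HTTP request into a Model call,
// then hand the result over to the View
router.get('/users/:id', async (req, res) => {
  const user = await userModel.findById(req.params.id);
  // View: a template fed with the Model's data (assumes a view engine is configured)
  res.render('user/profile', { user });
});

module.exports = router;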

Of course, MVC is not the only way to build modern web applications. But it's the most common one, and the alternatives follow a similar approach of separating the presentation and rendering process from the business logic.

Isomorphic, then Static, then Single Page

Three months ago, I wrote about why we need to combine different web rendering strategies to have an optimal balance between SEO and user experience.

As I am in the middle of rewriting Cowriters' codebase following my own advice, I found out it's much more efficient to take into account each rendering method one after the other instead of implementing all of them at once.

Adding all rendering methods in parallel increases the time needed to obtain a full prototype, which isn't a viable option when you are building a startup: fast iterations are key to receiving precious feedback, so the search for performance cannot be allowed to gain the upper hand.

When you take into account isomorphic rendering, client-side rendering, and pre-rendering, you need to duplicate some logic to complete a webpage: the page has to be loaded from the server dynamically or from a static file before being hydrated by the front-end framework, and you need to develop controllers from the web server as well as in-app routes (including duplicate authorization and data fetching logic). It's a lot of work.

If you have to do all three, it's best to start with an isomorphic approach. If you start with client-side rendering, you lose all SEO benefits, which is not acceptable for a content website. If you start with pre-rendering, you'll lose a ton of time building and rebuilding your static webpages to make everything work. Isomorphic rendering gives you an app that works so that you can receive instant feedback on your coding. It's also a great boilerplate to implement static generation and client-side routing in a future iteration loop.

While implementing hybrid rendering, you want to avoid meddling with in-app routing. You can fetch resources asynchronously at runtime instead of serving them, but you can't lose time on duplicating your routing logic to make it work on the client-side without refresh. Simply use regular links and server routes to navigate.

Once you're done with that and managed to put a first version in front of your stakeholders, you can add static support. It should be child's play since you already managed to output the content; you just have to write it to a text file and tell the server which file is which webpage.

At this point, your app has perfect SEO and Performance Lighthouse scores, but you want to be able to seamlessly go from one page to another instantly thanks to in-app routing. Not having to wait just feels great; it's that simple.

That's when you have to sync the back-end routing logic with the front-end and request the data you need from one page to another, before the user even asks for it.

During the isomorphic implementation, each webpage will have its own dedicated Javascript bundle. In the last phase, all pages share a common bundle to handle the routing logic. You'll have to fine-tune your Javascript and lazy load components to keep the size under control, but it will be worth it.

Javascript Or Not

The goal = distributed computing => from within the browser => Javascript is mandatory

Markdown-Based Static Blog

My new personal website is entirely static-generated, including the blog. It loads lightning-fast, even though it contains more than 400 articles. It's also markdown-based, which allows me to remove the need for a database while increasing the security of my website and allowing me to publish faster.

To do that, I built my own blog engine using the Symfony framework. It works just like Gatsby's blog starter. You have a blog directory in your application tree where you add articles. Each article is a directory containing one markdown file and the assets you need (pictures, etc.). The markdown file has a front-matter containing metadata written in YAML. It looks like this:

---
title: "3 Days Till Christmas Eve"
date: "2019-12-21T21:37:27.000Z"
description: "My brother arrived at the family house. His semester just ended, which means there are only three days left before the big Christmas Eve dinner. We will all gather at my uncles'. I say \"all\", but we ..."
tags: ["thoughts"]
---
My brother arrived at the family house. His semester just ended, which means there are only three days left before the big Christmas Eve dinner.
We will all gather at my uncles'. I say "all", but we never managed to get every member of the family in the same place. We are just too spread out.
My father has 11 brothers and sisters, so I have many cousins.

All I need to generate a blog from these directories is a Markdown service that I will call at build time.

It's a simple 3-step process: list the articles recursively starting from the root blog directory, parse each markdown file to extract the content and the metadata, and feed all those parameters to a Twig template that will be used by the static file generator.
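
My engine is written in PHP/Twig, but the same three steps fit in a few lines of Node as well. A rough sketch, assuming the gray-matter and marked packages and a renderTemplate helper that wraps your template engine:

const fs = require('fs');
const path = require('path');
const matter = require('gray-matter');
const { marked } = require('marked');

const blogDir = path.join(__dirname, 'blog');

// 1. List the article directories
const articles = fs.readdirSync(blogDir).map((slug) => {
  // 2. Parse each markdown file: YAML front matter + content
  const raw = fs.readFileSync(path.join(blogDir, slug, 'index.md'), 'utf8');
  const { data, content } = matter(raw);
  return { slug, meta: data, html: marked.parse(content) };
});

// 3. Feed the result to a template and write the static files
for (const article of articles) {
  const html = renderTemplate('article.html', article); // hypothetical template helper
  fs.writeFileSync(path.join(__dirname, 'static', `${article.slug}.html`), html);
}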

It's fast, simple, and extensible. I can build my blog in less than half a second, and I've added features like tag management and previous/next buttons.

An improved static blog generation feature will be integrated into the new Cowriters engine I'm working on, and available for a small additional monthly cost (starting from $3). Inspired by my work on Bouquin, it will allow writers to convert collections into full-blown blog applications with their own domain names.

On a JAMStack/Isomorphic-Rendered Hybrid Application

One thing I learned in college from Business Intelligence (the study of how to transform raw data into useful information) is that not all data has the same granularity. Hence, some database tables, also known as dimensions, cannot be updated at the same rate. There are slowly-changing dimensions you can update once a week or once a year (e.g. your business' yearly profit), and fast-changing ones you need to change in real-time (heart rate monitoring, for example).

Similarly, web pages within the same website do not necessarily change at the same speed.

It is common, however, to build each website as if the change rate was constant across the entirety of it. It's a useful approximation because it simplifies the overall software architecture. Websites are either static-generated with frameworks like GatsbyJS or Hugo (JAMStack), or dynamically-generated on the server, the client, or both (isomorphic rendering).

Each paradigm has its pros and cons.

The JAMStack paradigm is incredibly performant at building fast websites that scale. The webpages are pre-rendered and don't rely on a web server to be brought to an end-user. The content can be updated in real-time using client-side code and APIs.

It doesn't sit well with data that needs to change frequently. For example, building a real-time news feed would imply triggering builds at fixed intervals using cron jobs. In Gatsby's case, it's not even possible to change a small part of the website without rebuilding the whole thing, and it takes several minutes to deploy changes.

Fully static-generated websites are suitable for personal websites and blogs. If you build a social network, you need a more traditional approach.

Fast-changing web pages are better off with an offline-first isomorphic approach: server-side rendered for SEO, cached by service workers, and updated on the client-side.

Client-side rendering is awesome, but it hurts the discoverability of your content. Fortunately, not every web application needs SEO, but if you are Medium, you'll need search engines to see the latest content. Twitter is an example of an app built with client-side rendering: you see a loading spinner for a brief moment, and then you are served the latest tweets.

Client-side rendering is characteristic of Single Page Applications: the navigation between the different pages is smoother, and you don't need to redirect the user and their data between different parts of the website.

Server-side rendering is the oldest approach. A browser asks a web server for a resource defined by a URL, and the server sends back HTML content. There is no additional loading time and you can directly read the content. Search engines love that, and if the loading time isn't too high, your users will love it too.

Isomorphic-rendered applications take the best of both worlds. Search engines can see content generated on the server-side, but each subsequent update will be performed for the user on the client-side and will smooth up the web experience. The issue is that operating on both sides can lead to additional code complexity and redundancy.

Some websites would benefit from a hybrid approach. I'm thinking of web apps like Product Hunt, Makerlog, Medium, or even The Co-Writers. Slowly-changing web pages would be generated according to the JAMStack paradigm, and fast-changing ones with isomorphic generation.

In those cases, makers shouldn't limit themselves to one paradigm and use frameworks that aren't forcing them into one. Full-stack web frameworks such as Laravel, Django, Ruby on Rails, or Symfony come to mind. This way, the added complexity is limited since you just work with one single monolithic framework. Don't use GatsbyJS and PHP in parallel, for example. Here is how I would proceed:

  1. Analyze the different web pages of your application and figure out how fast they are likely to change. Make a distinction between fast-changing and slow-changing web pages.
  2. Use an isomorphic approach on fast-changing web pages: generate app shells with the newest content on the server-side, cache them using service workers (Progressive Web App), and use hydration mechanisms (ReactDOM.hydrate for ReactJS, data-server-rendered="true" for VueJS) on the client-side, as sketched after this list. You'll probably have redundant code shared between both sides, but it's a necessary evil in my opinion.
  3. Use built-in templating engines, .htaccess rules, and binary file responses to serve authorized pre-rendered static web pages when the main content isn't likely to change (e.g. an article page). Use service workers for caching and client-side functions and APIs to load the non-core parts of the page (the comment section) or hydrate sections with rich Javascript features (Medium's toolbar when you select a part of an article). Rebuild the static pages one at a time when the database is updated or using a less frequent cron job.
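
For the React case mentioned in step 2, the client-side hydration itself is short. A minimal sketch (App is a hypothetical root component also rendered on the server; React 18 users would call hydrateRoot instead):

import React from 'react';
import ReactDOM from 'react-dom';
import App from './App'; // hypothetical root component, shared with the server

// hydrate() attaches event listeners to the existing server-rendered markup
// inside #root instead of throwing it away and re-rendering it from scratch.
ReactDOM.hydrate(React.createElement(App), document.getElementById('root'));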

The only pain point I have at the moment is managing user sessions between static and dynamic pages to perform authenticated actions because I can't pass data to static-generated pages the usual way. I might have found a solution though, inspired by how Auth0 does it. I would need to pass an access token in the URL, fetch it, and rewrite the URL on the client-side. The access token is then stored in a variable at a namespace-scope that is inaccessible from the console, updated depending on its expiry date, and used anytime the user needs to perform an action.

This is how I would build a high-performance web application for The Co-Writers 2.0.

On Code Generators

Developers underestimate the power of code generators.

We don't always need more abstraction in the form of libraries, frameworks, or software products. When we are used to coding features a certain way, it isn't always in our best interest to make things simpler in the long run at the expense of a steep learning curve that decreases our productivity here and now. Simple scripts can automate repetitive tasks without forcing us to reinvent our workflow.

Writing RESTful APIs is a striking example.

We could easily buy no-code tools to do the job for us, but we would decrease our profits, be forced to learn a new tool, suffer vendor lock-in, and sacrifice our customization capabilities. More importantly, the abstraction layer would prevent us from fine-tuning our code to better serve the business logic, and a limited programming interface could forbid us from integrating the tool in an automation pipeline.

What if we could generate web API code without having to even think about it instead? A web API is essentially a proxy between an application and a database, so we can generate that code just by knowing the tables and their fields. Each table can be formalized as an interoperable JSON schema file, for example:

tables.json

{
  "name": "Comment",
  "fields": {
      "id": {"type": "int"},
      "text_fk": {"type": "Text"},
      "user_fk": {"type": "WritelierUser"},
      "content": {"type": "longtext"}
  }
}

The stakeholders can agree on code convention beforehand and write a reusable template for each table and data access layer:

crud_service.ejs

module.exports = {
  readOne: (args) => {        
      return `SELECT * FROM <%=table%> WHERE id='${args.id}'`;
  }
}

api_router.ejs

const express = require('express');
const mysql = require('../../service/utils/mysql');
const <%=table.toLowerCase()%>_service = require('../../service/crud/<%=table.toLowerCase()%>_service');

const router = express.Router();

router.get('/:id', (req, res, next) => {
  mysql.query(<%=table.toLowerCase()%>_service.readOne({ id: req.params.id }))
  .then(result => {
    res.send({ok: true, <%=table.toLowerCase()%>: result})
  })
})

module.exports = router;
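
Gluing it all together takes only a few lines. Here is a minimal sketch of the generator itself, assuming the ejs npm package and the templates above:

generate.js

const fs = require('fs');
const ejs = require('ejs');

const tables = [JSON.parse(fs.readFileSync('tables.json', 'utf8'))];

fs.mkdirSync('service/crud', { recursive: true });
fs.mkdirSync('controller/api', { recursive: true });

for (const { name } of tables) {
  const service = ejs.render(fs.readFileSync('crud_service.ejs', 'utf8'), { table: name });
  const router = ejs.render(fs.readFileSync('api_router.ejs', 'utf8'), { table: name });
  fs.writeFileSync(`service/crud/${name.toLowerCase()}_service.js`, service);
  fs.writeFileSync(`controller/api/${name.toLowerCase()}.js`, router);
}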

We can then simply copy/paste the resulting code as needed in a server.

controller/api/comment.js

const express = require('express');
const mysql = require('../../service/utils/mysql');
const comment_service = require('../../service/crud/comment_service');

const router = express.Router();

router.get('/:id', (req, res, next) => {
  mysql.query(comment_service.readOne({ id: req.params.id }))
  .then(result => {
    res.send({ok: true, comment: result})
  })
})

module.exports = router;

The generated code can still be adapted to serve specific requirements in one fell swoop (you could, for example, want to combine MySQL statements to avoid extra round trips to the database), and it didn't cost a thing.

We can also expand the generator to take into account the entire tech stack. For example, we could write a template to render standard React components from the table definition. It could be useful to quickly go through stored data from a back-office application.

If you write 60 lines of code on a good day, having the possibility to instantly output thousands of lines is a non-negligible productivity boost. The sky is the limit!

Optimizing My Coding Process

As a software engineer, I find the no-code movement inspiring. Tech is an onion where each layer abstracts away the underlying complexity. Personal computers as we know them didn't exist 50 years ago. Now, everyone uses operating systems with built-in graphical user interfaces and we don't think twice about it. The no-code movement is just another logical step toward more abstraction: if you're not automating your process, someone else will do it for you. This is a great opportunity for programmers to make new digital products.

One thing I've wanted to do for a long time now is to create a meta-programming framework to automate my most tedious coding workflow.

50% of the time I spend on back-end work is about creating the SCRUD logic, from the data access layer to application programming interface design. Technically, I could create a piece of software that takes a database description as input (like, say, a YAML recipe describing the tables and their relationships) and outputs full-stack application code using a template engine: CRUD React components, a Symfony service layer, front-end and back-end API code, common utility functions... I would have it all in just one command line.

Imagine how easy it would be to release complex custom software with this kind of code generator. I could churn out minimum viable products like words on a piece of paper.

Better, it wouldn't cost a thing. The main problem with most low-code services is how the monthly subscription fees stack up. My current stack costs me about 30 bucks per month. It scales well, so why should I learn yet another tool when I could automate myself instead? In my case, time is not the issue, but my burn rate is.

Not sure when I will have time to make this little helper though.

Organizing My Symfony Projects

Symfony is the first and only web framework I have ever used. It's an MVC framework for the PHP programming language that allows developers to boost their development speed while keeping application performance high. My tech stack is a bit more complicated than that, however, so I find it interesting to dive into it.

Symfony is a flexible full-stack framework, so you can work on every part of your application. My project repository is organized as follows:

/assets
  /css
  /fonts
  /icons
  /js
    /api
    /components
    /pages
    /shell
/blog
/public
  /build
  /img
  /upload
/config
/src
  /Controller
    /API
    /Cron
    /Webhook
  /DataFixtures
  /Entity
  /Security
  /Service
/static
/templates
/tests
.env
webpack.config.js
tailwind.config.js

I let PHP handle most back-end tasks in the /src folder.

Doctrine is used as an Object Relational Mapper to manage the data access layer and convert database entities into PHP classes (/src/Entity).

The Model is located in different directories: /src/Service for the services used to manage the application (CRUD and others) and /src/Security to take care of authentication and authorization.

The Controller is in the /src/Controller directory. I add my application routes there. Most of them return dynamic HTML pages, but that's also where I put API endpoints, cron jobs, and webhooks.

Most of the back-end configuration (database accesses, API keys, etc.) is done using a root .env file and YAML files located in the /config directory.

Of course, unit testing is not neglected, which is why there are /tests and /src/DataFixtures directories. PHPUnit is used to run tests, and Doctrine takes care of inserting fixtures.

The View is handled differently depending on which web development philosophy I choose. /templates contains Twig templates that are consumed by the back-end to create rich webpages or assets. When I use a static approach, all the files are written to the /static directory. Otherwise, they are directly generated and sent to the user at run time.

I use a combination of React, TailwindCSS, and SASS as a front-end framework. All my assets are located in the /assets directory and dynamically built by Webpack Encore (Symfony's Webpack flavor).

My Javascript source folder is divided between API functions, generic React components used throughout the application, and page-specific React components. I'm also adopting an offline-first approach, hence the /shell directory.

Production Pre-Rendering

In an ideal world, a static website generator rebuilds its webpages independently, depending on content changes: if the webpack bundle is modified, you rebuild every webpage, but if it's a tiny sentence modification, you only want to rebuild the corresponding webpage.

The reason is simple: the more processing you do, the longer the build time and the more server resources you need. Having full control over the build process reduces running costs.

The difference might appear negligible for small websites at first, but not when it's compounded over months, if not years, in production. Properly done, pre-rendering websites should make them virtually free to publish and maintain: they don't require much power to generate, almost none to serve when using a content delivery network and a web server like Nginx, and very little data transfer with offline caching using service workers.

Static web frameworks are still in their infancy. With 35% of the web being powered by Wordpress, there is an incredible opportunity to drive costs down while reducing the global electricity consumption. We only need new frameworks to try new things and make this dream a reality.

JAMStack is great for building simple websites, but it's not a solution for every web engineering problem. We can, however, integrate most of its principles into more traditional tech stacks to achieve incredible results.

Proper Web Search

how I speed up search

Real-Time Collaboration

Multiplayer is the norm in the gaming industry. It wasn't as commonplace back when I was a child, so my brother and I were extra careful about which games we bought with that in mind.

Multiplayer meant playing with friends, so it was a good excuse to have them come over and have a great time. That was before the rise of the massively multiplayer online games we have nowadays.

Similarly, collaborative features have yet to be the norm in the software industry. They are often nice-to-have premium features, rather than a real need. Real-time collaboration is even rarer.

I think it's starting to change thanks to the pandemic and technologies like WebRTC or WebSockets that are becoming easier to integrate. Even though these technologies have been around for a long time, the concrete implementations and the full-fledged libraries are only getting better.

I spent the last 3 days tinkering with a Javascript package called yjs, and I'm truly amazed by its ability to power true collaborative experiences. Real-time collaboration is quite hard to implement because changes happen in parallel, by definition. You are bound to encounter conflicts between changes, so the problem is to tell our application how to handle such conflicts in an efficient manner.

Developers are familiar with git versioning, but in a real-time environment, we can't possibly ask the participants to resolve each conflicting change to obtain a single source of truth. So instead of having a formal git-like workflow, we use algorithms to solve them as best as possible without the need for a human to intervene.

Most algorithms work well for small documents, but it gets really hard to handle all the conflicts at scale. That's why Google Docs caps how many people can edit a document at the same time.

Recent solutions like yjs, however, implement new algorithms that scale much better and take the whole collaborative experience to a whole new level. It even works when one of the participants performs changes while offline, with very little extra code.
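
A tiny sketch of the idea with yjs: two in-memory documents edited concurrently, then synced by exchanging encoded updates, with no manual conflict resolution needed.

import * as Y from 'yjs';

const docA = new Y.Doc();
const docB = new Y.Doc();

// Two participants type into the same shared text, concurrently
docA.getText('article').insert(0, 'Hello');
docB.getText('article').insert(0, 'World');

// Exchange the encoded updates (in a real app, over WebSockets or WebRTC)
Y.applyUpdate(docB, Y.encodeStateAsUpdate(docA));
Y.applyUpdate(docA, Y.encodeStateAsUpdate(docB));

// Both documents converge to the same text without anyone resolving a conflict
console.log(docA.getText('article').toString() === docB.getText('article').toString()); // true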

I'm quite excited to see where the space goes, as it will certainly unlock a new generation of collaborative products.

Remote Work

less fuel alter-nomadism

Reuse

Even if a digital product is short-lived, well-architected code can be reused to scaffold pivots, or even unrelated applications: implement a simple JSON Web Token authentication workflow or a payment gateway as a standalone web service, and you will still be able to use it in your next product. We could call it Circular Web Development, because every module of code can be reused and waste is progressively eliminated through each iterative cycle. Web applications are optimized for performance and resource consumption by design.

Indie vs Big Tech

And don't get me started on the FAANG monopoly over web infrastructure. Ironically, there wouldn't be indie startups without FAANG companies, because most of them run on AWS, Firebase, or similar solutions.

Web services

Switching your programming language from PHP to Rust can cut your energy consumption and processing time by a factor of roughly 30. Similarly, writing web services in Rust or Go instead of Ruby or Python means a fraction of the energy consumption (up to 60 to 70 times less in Rust's case).

Pre-rendering

Drastically reduces the amount of server processing while increasing loading speed.

The Maths of a Web Hosting Provider

I'm wondering whether it's possible to make a profitable web hosting business as a side project in 2020.

A website cannot exist without web hosting, so you have to pay a certain amount of money to keep it alive. Static websites that do not change very fast (weekly) can use free hosting in the form of services like Netlify, Github Pages, or Vercel, but you are bound to pay at some point if your project sees any growth.

Being a sysadmin demands a whole skillset, but there are things you can do yourself if you niche down: setting up a web server like Apache or Nginx is a well-documented process, for example. As long as you're willing to learn and don't bite off more than you can chew, there are a few things you can do for money.

If you specialize in Wordpress hosting, you can already address a huge market, with Wordpress representing 35% of all websites. But you need quite a lot of RAM to make it run: at least 1GB per website according to hosting provider SSD Nodes.

If I were to do this, I would focus on increasing my return on investment by investing in cheaper yet performant Raspberry Pi single-board computers with 2 to 8GB of RAM, turning them into server clusters, and focusing on static hosting with headless content management systems.

A Raspberry Pi 4 Model B with 8GB costs $75. If I were to host 8 Wordpress websites with it at $5 per month, I'd cover my initial investment in 2 months. On the other hand, I could drastically increase profitability by leveraging a JAMStack architecture: shared back-end micro-services, one common administration platform, and static web pages served to the end-users. I would need less RAM, since resources are shared among my customers, while increasing performance.

I also need to take into account the electricity bill, but I wouldn't need cooling tech since Raspberry Pis don't seem to need any (embedded architecture). The electricity bill could be reduced by using an off-grid setup running on renewables, but I'd still need the grid as a backup for emergencies.

The big pain point in this business is the Service-Level Agreement: you need to be up at all times, and this is where a background in engineering comes in handy, because you need to design secure, redundant systems. It's incredibly hard. Of course, I'd also have to stay close to the servers to perform maintenance tasks and make sure the hardware stays safe.

In conclusion, web hosting is a job and it's very unlikely I could do it part-time, let alone provide business-level SLAs. The alternative could be to make it a hobby and use it to host small websites where risks are low (e.g, my personal website or web experiments).

The Offline-First Movement

I'm big into offline-first these days. It started with GatsbyJS, then I built Bouquin in PHP and created my first progressive web apps. Now I want to add offline-first functionalities to The Co-Writers while I'm improving the content management UX. You'd be able to read, write, and organize posts and collections without an internet connection.

Offline-first decreases the amount of data you need to make an app work. You load it once, cache it, and update just what needs to be changed. As a software engineer, I find it extremely elegant: it's the most effective way to increase performance while reducing my app's network consumption.

It's a form of digital minimalism too, because it allows developers and users to do more with fewer resources. Applications like Facebook drain your data because their content is ephemeral and constantly re-downloaded, but even browsing regular websites can quickly add up and literally make you lose money on metered data plans.

If I load the landing page of the Washington Post, I've already put 5MB down the drain. If I refresh the page, I waste another 5MB because the web page has to be downloaded all over again. If the website were built offline-first, I would download the app shell once, the articles would be cached for a day or two, and everything would load much faster.

Now, another concern I have is environmentalism. The Co-Writers is powered by renewable energy, but it would be even better if I could reduce the website's consumption in the first place.

Those are the main reasons why I'm so excited about the new capabilities offered by service workers and the offline-first movement. If I were to build a web development consulting business in 2020, it would definitely be about designing offline-first web applications with new trends like progressive web apps, static-generated apps, and JAMStack.

The Problem With Jamstack Frameworks

Jamstack frameworks as we know them aren't great for fast-changing web applications: the build time is too long (NextJS, Gatsby), or we can't use web components easily (Hugo).

Even with NextJS's incremental static regeneration feature, the pre-rendering logic is far from optimal: I do not want my app to run checks on each request.

In the end, you can't do Jamstack without a server-side generator run by your hosting provider. Serverless removes the need to pay for and manage a web server, but you still end up limited by the capabilities of the framework you use.

As soon as you need a smart logic for the way you build your static files, you're better off with your own server application.

This is the case for Cowriters, for example: if a member publishes an article, I need to re-render their profile page, the main feed, and the relevant collection pages. If I have to rebuild the entire website every time, I will never be able to scale. The same goes if NextJS has to check my local folder every time it receives a request for one of the tens of thousands of webpages I manage. An atomic rebuild feature becomes necessary to handle these complex use cases, and it cannot be done without an unshackled server (that is, a NodeJS, PHP, Python, or Rails script) to orchestrate everything.
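To make the idea concrete, here is a rough sketch (not a full implementation) of what such an atomic rebuild could look like, where renderPage() is a hypothetical helper that pre-renders a single template to a static HTML file:

// Hypothetical helper: renderPage(template, destination, data) pre-renders
// one template to a static HTML file in the public folder.
async function onArticlePublished(article) {
  // Re-render only the pages affected by this publication
  await renderPage('article', `/posts/${article.slug}.html`, { article });
  await renderPage('profile', `/writers/${article.authorSlug}.html`, { authorSlug: article.authorSlug });
  await renderPage('feed', '/index.html', {});

  for (const collection of article.collections) {
    await renderPage('collection', `/collections/${collection.slug}.html`, { collection });
  }
}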

The reason why frameworks like Gatsby or NextJS are always smarter once you use them together with their respective hosting platform (Gatsby Cloud or Vercel) is precisely because of the server-side logic that is custom-made for them. This is vendor lock-in at its finest: they give you a free dose, but you still have to pay extra to get the whole experience. This wouldn't be the case with a more traditional web framework.

Trees and Oxygen

According to Website Carbon, 10k monthly page views on Cowriters emit the amount of carbon that four trees absorb in a year.

Turns out it's been pretty much the case over the last 6 months.

Since I'm using a green provider for my web server, the carbon emission is already offset, but I like to think my parents' garden offsets it too.

When they moved out to their current house, there were about six trees spread over 2000m².

15 years later, the garden has several creeping plants, shrubs, and new trees: an apple tree, a pear tree, wild cherries, and hazelnuts, but also non-fruit trees. The diversity is much better than it used to be, so it attracts shrews, hedgehogs, lizards, and all kinds of birds.

Over the next few years, I'd really like to help out more in the garden. Not just to grow their legacy, but also to learn more about how things grow and how I can minimize my carbon footprint. According to my approximations, my website activities should have a net-negative carbon footprint. I only need to maintain the current biodiversity, expand it wherever possible, and keep increasing my app's performance.

User Interface

SvelteJS

Minimalist Javascript bundles, without abandoning modern user interfaces.

Web Assembly

Top-of-the-line code performance and lightweight packages for even faster applications without leaving the browser.

Web Carbon Metrics

The more papers I read about the energy consumption of the ICT industry, the less I trust website carbon footprint calculators. They are inaccurate at best, fraudulent at worst.

The carbon emission of a website can be estimated from the amount of data transferred to load it, but that's only part of the bigger picture. We can in fact identify four main emission sources (infrastructure such as cooling and power, storage, transport, and processing), with varying importance depending on what kind of web application we are talking about. For example, the carbon emission from the transport layer is non-negligible in a real-time collaborative editor like Google Docs (each keystroke being monitored), while it's not the case for most static websites.

An objective metric needs to take into account many parameters that aren't necessarily easy to obtain. Fortunately, we can agree on a set of principles:

  1. The app needs to perform fewer requests.
  2. Requests must have smaller payloads.
  3. Request endpoints need to be as close to the client application as possible.
  4. Software programs must be efficient.
  5. Stored data must be cleaned and compressed as frequently as possible.
  6. Servers and third-party tools need to run on renewable energy.

These six principles cover most actions web developers can take to minimize their software footprint. Just like recycling at home, it's not because we cannot quantify the result that the benefits do not exist: it's simply common sense. We need to do our best, and it's dangerous to follow inaccurate metrics telling us it's ok to rest on our laurels once we reach a certain point. Total continuous improvement is the only way forward.
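As a small illustration of the first two principles in the stack used throughout this book, here is a minimal Express sketch, assuming the compression middleware: gzip makes every payload smaller, and long-lived cache headers on static assets mean repeat visits trigger fewer requests.

const express = require('express');
const compression = require('compression');

const app = express();

// Principle 2: smaller payloads, every compressible response is gzipped
app.use(compression());

// Principle 1: fewer requests, browsers and CDNs can cache static assets
app.use(express.static('public', {
  maxAge: '30d',
  etag: true,
}));

app.listen(3000);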

Web Server

Nginx, Caddy

The fastest highly-configurable web servers and reverse proxies.

Web workers

Parallel processing from within the web browser, for near-instant loading times. Offline-first architectures drastically reduce the amount of traffic generated by a given app, and web browsers now propose exciting features to create lightning-fast experiences.

What's a Service Worker?

Web browsers have evolved tremendously since Tim Berners-Lee's first attempt, in the early 1990s, at a software application designed to interact with web servers: they are now closer to full-blown virtual machines than mere applications.

We have reached a point where web browsers can seamlessly provide both rich offline and online experiences, by leveraging the same technologies used to build websites: this is why service workers are particularly interesting for modern web developers wanting to build resilient and performant web applications.

A service worker is similar to a daemon program you can find in any operating system: it's a script running in the background of the browser environment, without the need for user interaction or an open webpage. This is why you can use it without an Internet connection, even though you can also interact with service workers from within a webpage using Javascript.

A service worker can be used offline, but you need to install it first by requesting it from a web server. The great thing is you don't need to bother the user to install it, like you would with a traditional mobile app for example. Instead, the browser handles everything.
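Concretely, the installation boils down to a few lines. Here is a minimal sketch, assuming a sw.js file served from the site root and a hypothetical app shell made of /app.js and /app.css:

// In the webpage: ask the browser to download and install the worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// In sw.js: pre-cache the app shell, then answer requests cache-first
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open('app-shell-v1').then((cache) =>
      cache.addAll(['/', '/app.js', '/app.css'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});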

Even though you can use a service worker offline, it is restricted to the browser environment: you can't control a computer remotely or leverage all of its features. That's the downside of a service worker compared to what a mobile application can do, like accessing local files on a user's phone. Most websites don't need those capabilities, though.

The business applications are promising: we could theoretically replace any media file with an app built using HTML, CSS, and Javascript. No need for a PDF viewer or an MP3 player, you could just browse any document in a native web format you have total control over, without needing to stay online.

What's an API?

There are many ways to interact with software.

Everyone is acquainted with the concept of graphical user interface (GUI) to browse the web or type a document. You click, scroll, or hover somewhere and an action is triggered.

OSX and Linux users might also be familiar with command line interfaces (CLI), which are simply text-based user interfaces to perform the same things you would with a mouse.

Sometimes, however, you need two distinct software systems (applications, services, operating systems, etc.) to work together seamlessly.

You might, for example, want to create a website and a separate web service to deal with emails. Dividing those two systems improves the modularity of your software architecture: it's easier for developers to maintain and more performant to use.

That's when you resort to application programming interfaces (aka API) to make those two pieces communicate.

An API is simply a programmatic interface you can use to access software services. Unlike GUI and CLI, APIs are meant for machines, and thus hardly readable for humans.

In a web context, APIs define web services or resources in the form of detailed specifications: message formats, usage rates, and application protocols (mainly HTTP, but it can also be SMTP, for example), among others. REST (representational state transfer) is the most common way for web developers to design an API, but it's not the only one: SOAP (Simple Object Access Protocol) and the more recent GraphQL, to name only two.
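To give a concrete picture, here is a minimal sketch of a REST-style API with Express, reusing the hypothetical email web service mentioned above: the website never touches the mailing code, it just sends HTTP requests to these endpoints.

const express = require('express');
const app = express();
app.use(express.json());

// POST /emails: ask the email service to send a message
app.post('/emails', (req, res) => {
  const { to, subject, body } = req.body;
  // ...hand the message over to a mail queue here...
  res.status(202).json({ status: 'queued', to, subject });
});

// GET /emails/:id: check the delivery status of a message
app.get('/emails/:id', (req, res) => {
  res.json({ id: req.params.id, status: 'delivered' });
});

app.listen(3000);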

What's Machine Learning?

At the end of my engineering studies, I knew I wanted to work in two tech sub-fields: web development and data science.

Machine learning is a subset in the vast field of data science.

Nowadays, the computational power, as well as the telecom and storage infrastructure, have dramatically improved: it's not only possible to generate a lot of data, but also to collect and store it.

The problem is that we lack techniques to extract useful knowledge from this mass of raw data, so we need new methods to analyze it and make decisions based on the newly-found facts.

That's where machine learning comes in, with the goal of developing artificial systems able to improve their performance with experience. There are mainly two kinds of jobs we use machine learning for: supervised learning, where we learn from labeled examples to classify new objects, and unsupervised learning, where we discover useful groupings in unlabeled data.

A machine learning project can be broken down into six phases:

  • Business understanding, to understand the business objectives and requirements so as to convert them into a data mining problem definition.
  • Data understanding, which is about getting data and getting familiar with it (we need to understand what's in it: its quality, its context, and its features).
  • Data preparation, the phase where we build a dataset that we will feed to our data mining algorithms (data cleaning, data transformation, data sorting, etc.).
  • Modeling, where we let the algorithm run to create the data model that will make future business decisions.
  • Evaluation, to measure the model's level of quality and determine whether it fulfills the business objectives.
  • Deployment, aiming at organizing the extracted knowledge to make it understandable. The data mining process also has to be made repeatable for future use.

Personally, I'm interested in applying machine learning to text processing. That's one of the things I try to study when I find the time, and one of the next features of Cowriters uses one thing I learned from it.

Why Document Versioning?

Versioning tools have been around for a few decades, but most people still have the nasty habit of manually versioning their documents in the likes of report_FINALFINALFINALversion.docx. And I can't blame them: text editing software is often terrible to work with.

Why do we even version documents in the first place? Two reasons: time travel, and collaboration.

First, versioning allows writers to travel back in time. With versions, we can trace every modification brought to a document and either consult or revert it. It's a precious tool when a text is bound to change. You might for example need to rewrite parts of a blog post or trim notes: versioning allows you to keep all past information without having to worry about losing valuable content, because no change is final.

If you write a book, you are likely to share your manuscript with a variety of people: versioning is also a fundamental tool for cooperation. Versions can be analyzed by software tools to compare and merge conflicting changes, without the need to handle them yourself. It's particularly nice as an author when you don't want to go through your own book for the tenth time.

As Karl Popper once said, no book can ever be finished. If a writer needs to create new book editions, having a versioned file is a great way to tell readers what changed and what remained. It's a great way to create trust and develop a loyal audience.

Why Versioning?

Versioning is a tool used to manage the evolution of a software project's source code over time.

Let's imagine we have a text file containing phone numbers from people close to us.

Anyone should be able to update their own phone number while still seeing the same information, so we put this file in a public folder on Google Drive. We can share and access it with a link, and modify it in a collaborative fashion.

We also want to see all the changes made over time. This way, if a phone number doesn't work, we can still try older ones. For each change, a new version of the file has to be created. We've probably all been in the situation where we have to write a report and keep renaming it "report_latestlatestversion.docx". That's a typical versioning problem.

If a person has two phone numbers, we want to agree on a single one to use for the sake of clarity. We thus need a process to ensure a single source of truth and prevent conflicting information from entering our file.

Software versioning allows all that, and more, in a single place.

It helps developers collaborate on a complex tree of files while helping to prevent mistakes from entering the code or the production environment.

Each change is authorized, tracked, and can be reverted. When you work on a new feature or need to fix a bug, you can create a new version of the project without breaking anything that already worked. This alone helps speed up the delivery of new software updates.

If conflicts between two versions arise, when two developers work on the same file at the same time for example, the versioning tool gives us an interface to operate changes at a more granular level.

A software project doesn't live on some developer's computer. Instead, it's stored using versioning: even if a developer gets their laptop stolen, they can still recover all the source code from the centralized versioned repository.

Over the last years, version control systems like Git or SVN have become an essential part of software development, often combined with the process of automating the release of new software changes known as CI/CD (Continuous Integration / Continuous Delivery). You can't work at a professional level without it, so it's important to leverage this kind of tool early on when you're learning how to code.

Implementations

Another JAMStack Framework?

I love Gatsby and NextJs, but I don't like being tied to their respective back-end infrastructures to handle advanced use cases like incremental pre-rendering to reduce build time. The pricing is ridiculous when you compare it to a traditional cloud hosting plan.

What full-stack web developers need is a NodeJS framework that takes a MERN stack (MongoDB + Express + React + Node) and makes it JAMStack compatible. BlitzJS already showed it's possible, but the underlying use of NextJS is problematic, as I previously explained.

We desperately need a tool to have the best of both worlds. Serverless is all you need when you have a medium-sized static website or a fast-changing app that doesn't require SEO, but you still need a server to manage complex configurations. That's one of the reasons why JAMStack didn't replace more traditional frameworks like Ruby on Rails, Laravel, Django, or ExpressJS.

At the same time, we can't afford to keep using traditional web infrastructures that require more resources and provide less optimal user experiences. Information technologies emit as much CO2 as the aviation industry, so reducing every website and app's data footprint will have a positive impact on everyone.

The framework of the future will be lightweight, fast to learn, develop with, build, and load, and easily deployed to any green hosting provider at a fraction of standard industry costs. Opinionated and convention-based, but flexible when you need it to be.

The way I see it today, it would be a mix of Node for static file generation, Express to handle APIs and data access, Nginx to serve static content, and Preact to create interactive user interfaces. The rendering approach would be based on isomorphic rendering with Preact code transformed into static HTML files and hydrated on the client-side. The Express web server should never serve HTML itself and leave it to Nginx, since Nginx is written in C and will thus be much faster and more energy-efficient than Express to do that.

Building a text editor

When I started The Co-Writers, I already had experience building text editors. They look simple, but they are in fact incredibly hard to get right. A few items to keep in mind if you'd like to build your own text editor:

  1. Text representation: avoid storing HTML in your database, and prefer instead editor state objects. I tried many plugins, but none beat DraftJS so far. The learning curve is high, but the level of customization is outstanding. A text editor built using DraftJS uses a JSON object to manage its state, which can later easily be converted into other text formats. Storing state objects is more secure because it makes it harder to inject code. It's also the only way to display content consistently across different web browsers, and to simply add advanced features such as real-time collaboration or complex custom text entities (mentions, hashtags, etc.).
  2. Autosave: it's 2020, you don't expect your users to manually save their content every ten seconds. You have to regularly store the state of the text editor to avoid data loss. An autosave call has to be debounced to reduce the load on the server. You also have to take into account network errors: an autosave feature should be implemented with offline in mind, using web APIs such as Background Sync and Web Storage.
  3. User Interface: Medium is probably the standard in terms of editorial experience. It's clean, minimal, and easy to use. Add inline Markdown support (typing Markdown automatically transforms it into its WYSIWYG equivalent), and you obtain a state-of-the-art text editor.
  4. External tools: Rich text editor plugins based on an internal state representation rarely work with external tools modifying the DOM, such as Grammarly. Integrating natural language processing tools within your tech stack is often the best option. Spell checker instead of Grammarly, for example.

Building your own text editor is a lot of work, so do not take it lightly.

Building Your Own Static Website Generator

Switching your website from a dynamic approach to a static one doesn't imply learning a new technology stack. In fact, you can turn almost any dynamic web framework into a static website generator.

Modern web frameworks are incredibly flexible; you can do virtually anything from your local environment: create command-line tools, expose API endpoints, manage the file system, etc.

A website is just a directory with a bunch of documents inside. Static-generating a website means pre-rendering those documents to make them readily accessible without complex CPU-intensive tasks down the road. Changing the website's content means re-rendering its web pages, but it's not as resource-demanding as asking a web server to re-render resources on each visit.

When you transform a web framework like Symfony, Laravel, or Ruby on Rails into a website generator, your web pages are still built using a templating engine and services located in your Model layer. All you need is a command line or an endpoint in your Controller layer to write the Build logic.

Here is for example the static file generator service I wrote in my local Symfony application:

<?php
namespace App\Service;
use Twig\Environment as TemplatingService;

class StaticFileGenerator {
    private $templating;
    private $folder;
    
    public function __construct(
        TemplatingService $templating
    ){
        $this->templating = $templating;
        $this->folder = __DIR__ . '/../../static/';
    }
   
    public function render($template, $dest, $args = []){
        // Create the destination folder if it doesn't exist (recursively, for nested paths)
        if(!file_exists(dirname($this->folder . $dest))){
            mkdir(dirname($this->folder . $dest), 0777, true);
        }
        
        file_put_contents(
            $this->folder . $dest,
            $this->templating->render(
                $template,
                $args
            )
        );
        
        return $dest;
    }
    
    public function build($pages){
        $beg = microtime(true);
        
        if(!file_exists($this->folder)){
            mkdir($this->folder);
        }

        foreach($pages as $page){
            $this->render(
                $page['template'],
                $page['dest'],
                $page['args']
            );
        }

        return microtime(true) - $beg;
    }
}

I then just request the data I need, use Twig as a templating engine to generate all the files I need (not just HTML, but also Javascript code or CSS), call this static file generator service from the command line, and everything is written to the local /static folder.

I like to build my personal website's assets in my local environment and upload them manually to Netlify by compressing the resulting static folder.

Even though my website contains more than 500 web pages, it only takes half a second to build and another few seconds for Netlify to deploy the files. The resulting website also has a near-perfect performance score when I test it against Google Lighthouse.

CI/CD with Github Actions

The way I deploy code to production is pretty archaic. I store my build in the Git repository (yikes, but at least I don't commit my node_modules folder), log in to my web server using a secure shell (ssh), git pull the latest changes from the master branch, restart the express server, and delete the local cache and the Cloudflare cache. It works, but I sometimes run into problems (like forgetting to install new yarn packages, for example) and the server crashes. I can do better with a proper Continuous Integration/Continuous Delivery process.

It's not the first time I've looked up CI/CD tools to improve my workflow, but I've always found things like Jenkins or Travis CI off-putting: you need to install them, sometimes rent an entire web server, and learn how to set up a pipeline. I did some Jenkins back in college, but I still found it to be too much trouble for my simple use case.

It's only recently that I stumbled upon Github Actions, while making my own JAMStack framework to imitate how Netlify and Vercel implement CI/CD, and I found it a breeze to work with.

Most use cases are already handled by third-party plugins, and there is nothing to pay or install to get started. In less than an hour, I had a pipeline ready to build my static website written in Preact and NodeJS, cache the node_modules folder from one build to another, and store the pre-rendered assets.

For private repositories, the first 2000 Github Actions minutes of each month are free, and then you have to pay $0.008 per minute. In my case, it only takes two minutes to install and build the first time with about 1000 blog articles, and less than a minute to rebuild: even if I were to update my website every day, I would never have to pay anything. For public repositories, it's entirely free. You can also self-host a Github Actions runner to handle more complex use cases, in which case you'd only pay a web hosting bill.

I'm curious to learn more about Github Actions, but the next concrete step regarding CI/CD in my projects is to set up a webhook request/listener to trigger a git pull, handle caches, and restart the web server.
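A rough sketch of what that listener could look like, assuming a shared secret in a DEPLOY_TOKEN environment variable and a deploy.sh script that pulls, clears caches, and restarts the server (no signature verification or queueing, so not production-grade):

const express = require('express');
const { exec } = require('child_process');

const app = express();

app.post('/deploy', (req, res) => {
  // Reject requests that don't know the shared secret
  if (req.query.token !== process.env.DEPLOY_TOKEN) {
    return res.status(403).end();
  }

  // deploy.sh is assumed to run: git pull, yarn install, cache purge, restart
  exec('./deploy.sh', (err, stdout) => {
    if (err) console.error(err);
    else console.log(stdout);
  });

  res.status(202).send('Deploy started');
});

app.listen(9000);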

Code-Splitting with SSR Preact

It took me 3 days to figure out how to implement route-based code-splitting with server-side rendered Preact because of the lack of documentation, so I'm going to share with you in this article how I did it.

Preact has the same API as React at only a fraction of the size (3kb), so it's interesting for improving your app's time-to-interactive while decreasing your static website generator's build time.

Traditional routing with Preact looks like this:
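Something along these lines, assuming preact-router and two hypothetical page components:

import { h, render } from 'preact';
import Router from 'preact-router';
import Home from './routes/Home';
import Article from './routes/Article';

const App = () => (
  <Router>
    <Home path="/" />
    <Article path="/posts/:slug" />
  </Router>
);

render(<App />, document.getElementById('root'));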

The single-page app is rendered, each component is downloaded, and the router displays the one corresponding to the current route.

It works well, but the more routes you have, the longer it will take to download the Javascript bundle. A better solution would be to download only the Javascript we need, which is called lazy loading, or code splitting:
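Here is a hedged sketch of the same routes with a small hand-rolled wrapper around dynamic imports:

import { h, render, Component } from 'preact';
import Router from 'preact-router';

// Wraps a dynamic import: the component is only downloaded when first rendered
const lazy = (load) =>
  class extends Component {
    state = { Loaded: null };
    componentDidMount() {
      load().then((m) => this.setState({ Loaded: m.default }));
    }
    render(props, { Loaded }) {
      return Loaded ? h(Loaded, props) : h('p', null, 'Loading...');
    }
  };

const Home = lazy(() => import('./routes/Home'));
const Article = lazy(() => import('./routes/Article'));

const App = () => (
  <Router>
    <Home path="/" />
    <Article path="/posts/:slug" />
  </Router>
);

render(<App />, document.getElementById('root'));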

Instead of loading the components, we use dynamic imports to only record the references to the components. When the app needs a component, it is downloaded asynchronously while a loading message is displayed.

That's nice, but what about server-side rendered applications? Websites that can't get around search engine optimization can't afford to be traditional single-page applications, so we need to make code-splitting work with SSR too. This part isn't documented anywhere, so it took me many trials and errors to figure it out.

You simply need to import your Router component from the server and tell it what to render, but it's a bit tricky. In ExpressJS, for example:

server.js
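A minimal sketch of what the server entry point might look like, assuming Express, preact-render-to-string, and a transpiled app.js exporting the App component:

const express = require('express');
const { h } = require('preact');
const renderToString = require('preact-render-to-string');
const { App } = require('./app');

const server = express();
server.use(express.static('build')); // client bundle and assets

server.get('*', (req, res) => {
  // Pass the requested URL so the Router knows which route to render
  const html = renderToString(h(App, { url: req.url }));
  res.send(`<!DOCTYPE html>
<html>
  <body>
    <div id="root">${html}</div>
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

server.listen(3000);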

app.js
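And a sketch of the shared application file, hydrated in the browser:

import { h, hydrate } from 'preact';
import Router from 'preact-router';
import Home from './routes/Home';       // lazy-loaded wrappers in practice
import Article from './routes/Article';

export const App = ({ url }) => (
  <Router url={url}>
    <Home path="/" />
    <Article path="/posts/:slug" />
  </Router>
);

// In the browser, attach event listeners to the server-rendered markup
if (typeof window !== 'undefined') {
  hydrate(<App />, document.getElementById('root'));
}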

Basically, we tell the Router our current server-side route, and it will figure out which front-end route to render. In the front-end, we hydrate our SSR output.

If you use the code as-is, however, nothing will be rendered server-side because of the dynamic imports. Dynamic imports are asynchronous, so you have to wait for their results to come back before you can use them. Unfortunately, the way NodeJS handles these imports during the render pass makes that impossible: the promises are skipped entirely and you are left with the loading components.

The trick is to use the synchronous instruction require when you're on the server-side, and only use dynamic imports in the browser. To do that, we update the getComponent property of each route:
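A hedged sketch of the trick (the exact shape depends on how your routes are declared):

const IS_BROWSER = typeof window !== 'undefined';

const routes = [
  {
    path: '/posts/:slug',
    getComponent: () =>
      IS_BROWSER
        ? import('./routes/Article').then((m) => m.default) // code-split in the browser
        : require('./routes/Article').default,              // resolved synchronously during SSR
  },
  // ...the other routes follow the same pattern
];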

And voilà! Routes are lazy-loaded and server-side rendered, the best of both worlds! You can also leverage the incredible speed of Preact to make your application even more performant.

Database Migration With ETL

Operating a database migration is the most stressful thing you can do in the lifecycle of a web application. An application can break, but losing data is unacceptable: you need a process to prevent any data loss while minimizing service interruption.

Whether you want to change your database system, reinvent your data schema, or switch to a new provider, you'll want to follow a carefully thought-out Extract-Transform-Load (ETL) process.

Let me give you an example of how I do it at Cowriters.

The Extract step consists in configuring a remote connection to the MySQL database and downloading the data. The problem is that I'm dealing with hundreds of megabytes of data, so I need something called data staging to transport the data little by little. It's a bit like delivering goods: if you use a single cargo ship and said ship sinks, you lose everything. With data staging, you transport the goods with an army of trucks, from HQ to local warehouses: the probability of losing your products is much lower, and you can recover the lost goods much more effectively. In Cowriters' case, each table is downloaded separately to local JSON files containing a few thousand rows each. If the transfer were to fail at some point, each packet of data could be recovered precisely.
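A simplified sketch of that Extract step, assuming the mysql2 client and a hypothetical list of tables: each table is downloaded in chunks of a few thousand rows and written to its own set of JSON files.

const fs = require('fs');
const mysql = require('mysql2/promise');

const CHUNK_SIZE = 5000;
const tables = ['Text', 'TextVersion', 'Category']; // staging targets

async function extract() {
  const db = await mysql.createConnection(process.env.SOURCE_DB_URL);
  fs.mkdirSync('./staging', { recursive: true });

  for (const table of tables) {
    let offset = 0;
    while (true) {
      const [rows] = await db.query(
        'SELECT * FROM ?? LIMIT ? OFFSET ?',
        [table, CHUNK_SIZE, offset]
      );
      if (rows.length === 0) break;

      // One small JSON file per chunk: a failed transfer is easy to recover
      fs.writeFileSync(`./staging/${table}-${offset}.json`, JSON.stringify(rows));
      offset += CHUNK_SIZE;
    }
  }

  await db.end();
}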

The Transform phase is about reading these JSON files, mapping each field to the new database according to the desired schema, and cleaning the data we don't need. Depending on the amount of processing required, it can also be useful to divide the data into small, manageable packets.

Last but not least, we need to load the data into the target database. At this point, our data is ready to be used by our application, but we still have to prepare and tune the database: indexes, foreign keys, full-text search... you name it. If nothing breaks during the incremental upload, you're good to go!

When it comes to my data, I have trust issues regarding third-party tools, so I use homemade scripts to perform all these tasks to optimize the process.

DIY Green Web Hosting

I'll be paying $320 for green web hosting between March 12, 2020 and March 11, 2021. $27 per month is not excessive to run multiple websites, but it's still shared hosting with only 2GB of RAM, so I'll eventually need to upgrade.

I'm perfectly satisfied with my current hosting provider, but I can still cut costs while improving my web server's performance.

$320 is a lot of money to play with. With this amount, I could buy an 8GB Raspberry Pi 4 Model B ($85), a USB battery ($65), and a 180W solar panel ($165). In other words, I could build my own off-grid dedicated web server running on 100% renewable energy.

It would be ideal to distribute pre-rendered offline-first websites over a free content delivery network. Using renewable energy would also mean no recurring costs, except for the high-speed Internet bill. Building a secure web server is a well-documented activity nowadays, so it wouldn't impact security either. Buying additional Raspberry Pi to build a server cluster would allow for load distribution and content replication, to provide high availability and content recovery.

The unused RAM could also be rented to other makers for an additional income source. I would love to have a cool side business that doesn't focus on software, and it would allow me to participate in the rise of a greener Internet.

I have until March 11, 2021 to figure it out. I wouldn't need external funding to test the idea, since I can get started with cheaper Raspberry Pi models (as low as $35) and smaller solar panels: I just need to find a way to increase the speed of my Internet connection and a few hours to order everything.

From Dynamic to Pre-Rendered SPA

I'm currently working on transitioning Writelier from an isomorphic app to a static-generated single page application.

The goal is three-fold: remove loading times, obtain a resilient app that doesn't need an Internet connection to work, and increase my target audience to mobile and desktop users.

As I explained in a previous article, the best way to go about building a pre-rendered SPA is to start with an isomorphic app, then implement the pre-rendering logic before handling front-end routing: it's the method with the fastest time-to-production.

I just started coding the pre-rendering logic: how the web server generates and serves webpages, and how it handles content changes. Since I can't pass data to a static webpage at runtime, some of the backend logic has to move away from the route controllers to be placed in application programming interfaces that will be called by React from the client-side. That's typically the case for all the authentication logic.

After this step, I have to implement the routing logic in React: when a user clicks on an internal link, the app has to display the corresponding page component. Ideally, components are pre-loaded before the user asks for them to avoid loading times, but each component has to be lazy-loaded to decrease the size of the application bundle (and thus increase the app's time-to-interactive score). Doing that with a pre-rendered or server-side rendered application is something I wasn't familiar with, so I experimented with the concept beforehand. Code-splitting isn't easy to understand, but I think I'm getting the hang of it.

Last but not least, I have to package all the assets into a progressive web application and leverage the offline storage features provided by web browsers. I already have some experience with this part, but Writelier is on another scale. With more than 25,000 articles published by hundreds of writers, I have to be extra-careful about the offline caching mechanism. Otherwise, there is a chance that some webpages don't get updated correctly and break things.

I planned the release for October 15. Once the move to the new complete architecture is done and most bugs are fixed, I'll finally be able to launch and focus more on community and marketing.

Grids

I love grid layouts. They're my favorite type of layout, and I do intend on using them as much as I can on Cowriters' new website. A few reasons come to mind.

Grids allow you to take a step back and see the whole picture. They are great for scanning a lot of content at once and easier to navigate. If you manage to blend all the information you need into a grid, you can browse webpages much quicker by pressing the tab key. That's not the case when you have two navigation menus and side panels to go where you want. A grid and a toolbar with a search form are all you need to browse content simply and efficiently.

Full-width grid layouts are mobile-friendly and responsive by design. You don't have to hide parts of your website on lower resolutions or distinguish the desktop layout from the mobile one.

Grids done right are aesthetically pleasing. Cards can form patterns and are exciting to look at. Unlike regular lists, they aren't dull to browse. Instagram's profiles and Explore page are a good example of that. Marketing Examples has made it a distinctive characteristic.

A grid forces each element to be short, concise, and impactful, because the width and height of each part are limited. It's a great way to make sure the content is designed for the reader and easy to consume, or easy to ditch when it isn't worth the reader's time.

Using a common grid to display a variety of information (the way Nomad List blends in content, calls to action, ads, and announcements) removes unnecessary introductory elements. The reader can just intuitively hop in and go where his curiosity guides him. After all, astonishing content is like wild sex: you have to skip foreplay and get into it. Telling the visitor what to do or providing a lengthy introduction can be a major turn-off: just place the user at the center of the experience right away and give him directions. That's probably the reason why Pornhub is designed with a grid layout.

How To Code An Apocalypse-Proof Autosave Feature

Autosave: it's 2020, you don't expect your users to manually save their content every ten seconds. You have to regularly store the state of the text editor to avoid data loss. An autosave call has to be debounced to reduce the load on the server. You also have to take network errors into account: an autosave feature should be implemented with offline in mind, using web APIs such as Background Sync and Web Storage.
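A minimal sketch of the debouncing part, assuming a hypothetical /api/drafts endpoint and an editor that calls queueAutosave() on every change:

let timer = null;
const DEBOUNCE_MS = 2000;

// Called on every editor change: only the last call within 2s triggers a save
function queueAutosave(editorState) {
  clearTimeout(timer);
  timer = setTimeout(() => autosave(editorState), DEBOUNCE_MS);
}

async function autosave(editorState) {
  const payload = JSON.stringify(editorState);
  try {
    await fetch('/api/drafts', {
      method: 'PUT',
      headers: { 'Content-Type': 'application/json' },
      body: payload,
    });
  } catch (err) {
    // Network error: keep a local copy so nothing is lost,
    // and let a sync routine retry when the connection comes back
    localStorage.setItem('draft-backup', payload);
  }
}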

How To Do This: Share Feature

User story: A user U can share a resource R owned by user O to a space S.

Description: How can I reproduce Facebook's share feature?

  1. Relational database

Table User_U: id

Table Resource_R: id, fk_user_o_id (foreign key to user O)

Table SharedResource_SR: id, fk_resource_id (foreign key to resource R), fk_user_u_id (foreign key to user U), fk_space_s_id (foreign key to space S), shared_at (datetime)

Table Space_S: id

Observations: We go with a denormalized schema. The data redundancy is quite small (a regular user shares a given resource in a space once or twice) so we will prefer increasing the Read performance.

  2. SCRUD functions

Share a resource

Requirements:

  1. A resource can be shared several times in a space. (interactions should be encouraged, spam can be handled by moderators)

Pseudo-algorithm:

  1. Insert SharedResource_SR row

Unshare a resource

Requirements:

  1. Can't unshare someone else's shared resource.

Pseudo-algorithm:

  1. Delete the SharedResource_SR row by id, where the fk_user_u_id field matches the requesting user's id

Get all resources shared in a given space

SQL:

SELECT r.id, IF(sr.fk_user_u_id = '{{user_id}}', true, false) AS can_unshare 
FROM Resource_R AS r 
JOIN SharedResource_SR AS sr ON sr.fk_resource_id = r.id 
WHERE sr.fk_space_s_id = '{{given space id}}'

Observations:

In this SQL statement, we add a condition to tell the view whether or not the current user can unshare a given text. If that's the case, we display an Unshare button where appropriate.

HTML Tags: Block Elements

There are 28 main block elements used to write HTML documents. Block elements are by default stacked on top of each other, whereas inline elements are used inside block elements and displayed along the same line.

  1. <address> to display contact information
  2. <article> to wrap independent content
  3. <aside> to display secondary content next to an another section
  4. <blockquote> to quote another source
  5. <canvas> to draw graphics (prefer the image tag to display illustrations)
  6. <div> to group other tags (also known as generic block)
  7. <fieldset> to group form elements together
  8. <figcaption> to display a caption corresponding to a figure
  9. <figure> to tie an illustration to a caption
  10. <footer> to display a footer, generally at the end of a section
  11. <form> to add a form
  12. <h1>-<h6> to display titles and subtitles (headings); the smaller the digit, the more important the heading
  13. <header> to display a header, generally at the beginning of a section
  14. <hr> to separate content with a horizontal rule
  15. <li> to display a list item
  16. <main> to highlight the main part of a HTML document
  17. <nav> to display a navigation bar or menu
  18. <ol> to create an ordered list (e.g numbered list)
  19. <p> to write a paragraph (default text)
  20. <pre> to preserve spaces and line breaks from a preformatted text (useful to display computer code)
  21. <section> to organize content into sections
  22. <table> to display a table
  23. <tfoot> to add a footer to a table
  24. <thead> to add a header to a table
  25. <tbody> to wrap table data
  26. <ul> to display unordered list (e.g bulleted list)
  27. <audio> to display an audio track
  28. <video> to display a video

Block elements are important to organize HTML documents in a way the browser can easily understand and interpret. A good understanding of these tags allows you to improve your website's search engine optimization (SEO) and make it easier to discover on the Internet.

Image Processing and Pre-Rendering

Image processing is a huge performance pitfall in static website generators when it comes to total build time, so I avoid automating this part at build time: where I can, I do it by hand up front or handle it at runtime instead.

For example, I prefer handmaking the social cards of my blog articles to make them more unique and eye-catching, but I will add a step in the webserver to create the cards if I have 1000 articles without one to show to web crawlers. If it takes 250ms to generate a 1200 * 628 image (that's about what it takes from my dev machine), it would save me 250s of build time for 1000 web pages: it's non-negligible since your total build time is the main pricing metric for static web providers like Netlify, Gatsby Cloud, or Vercel ("only" 300 free build minutes per month using Netlify, for example).

Image optimization is also an important factor in improving a website's loading speed, so static generators apply resizing algorithms, blur-up techniques, and compression mechanisms to help you deliver better content. But again, these things take time, and I'd wager it's often better to open your photo editor once rather than go through the aforementioned steps every time you build your static repository.

I use Gimp or Sharp to resize my images, CSS to display loading animations, and TinyJPG or the compress-images library to compress images, depending on how many I need to deal with. I put the resulting assets in my regular picture folder and entirely skip the image processing step at build time.
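A small sketch of the kind of one-off script I mean, using the sharp library: every image in a source folder is resized and compressed once, then copied to the static assets folder (folder names are hypothetical).

const fs = require('fs');
const path = require('path');
const sharp = require('sharp');

const SRC = './raw-pictures';
const OUT = './static/pictures';

fs.mkdirSync(OUT, { recursive: true });

fs.readdirSync(SRC).forEach(async (file) => {
  const dest = path.join(OUT, file.replace(/\.\w+$/, '.jpg'));

  await sharp(path.join(SRC, file))
    .resize({ width: 1200, withoutEnlargement: true }) // never upscale small images
    .jpeg({ quality: 80 })                             // compress aggressively
    .toFile(dest);
});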

Intro to HTML Tags

Tags are the building blocks of webpages, as we saw in the previous article of this series on web development.

Each tag has its own meaning. Combined with other tags, we can obtain a variety of rich documents.

Each HTML tag is interpreted by the browser and can be displayed differently as needed using Cascading Style Sheets (CSS), instructions containing presentation rules, like the position of the tag, its color, its size, and so on.

Tags inserted in the body of the HTML document (inside the <body></body> tag) will be visible by the reader.

That's how we manage to write text documents, for example.

Paragraphs are written inside <p></p> tags.

Titles are described by a series of tags from h1 to h6 (the most important title will be header 1, the second most important will be header 2, and so on):

<h1>This is the main title!</h1>

Images are inserted using the <img/> tag, links in-between <a> and </a> (a for anchor), unordered lists with <ul>, etc. There are many more tags to know to craft texts, but I will let you discover them yourself.

These tags could be called "primitive" tags, because they have a meaning by themselves. There are seven other "composite" tags that are meant to wrap others, to organize the overall semantic structure of an HTML document and create advanced design layouts: header (the header of an article, a card, etc.), nav (navigation, typically a navigation bar), footer (at the bottom of a website, where you can find contact information), section, main (main part of a web document), article, and aside (additional information).

You will also find generic tags that have no semantic meaning but can be used to design the document: <span> for textual data, and <div> for block-level design.

Using all the previously mentioned tags, a webpage might look like this:

<!DOCTYPE html>
<html>
  <head>
    <title>A webpage</title>
  </head>
  <body>
    <header>
        <h1>A title</h1>
    </header>
    <nav>
        <a href="/">Home</a>
        <a href="https://twitter.com/BasileSamel">Twitter</a>
    </nav>
    <main>
        <div>
            <article>
                <p>This is an article paragraph</p>
                <img src="picture.jpg"/>
            </article>
            <article>
                <p>This is a second article</p>
                <img src="picture2.jpg"/>
            </article>
        </div>
    </main>
    <footer>
        <p>© Basile Samel, 2020</p>
    </footer>
  </body>
</html>

Lazy Hydration with React

The main performance pitfall with server-side rendered code occurs when it's time to hydrate the components with React.

When you receive HTML from a server, React has no way to interact with it. Hydration is the phase where your server-side rendered React components become interactive.

When you perform a Lighthouse test, hydration is taken into account in the Time-to-Interactive indicator: the longer it takes, the more it negatively impacts your score.

As a rule of thumb: the bigger the bundle, the longer hydration will take. Decreasing your bundle size is often arduous, though, because performance improvements tend to require functional sacrifices.

Lazy hydration remains much easier to implement and brings immediate returns. It consists in dividing the hydration process to prioritize elements above the fold: if a component can't be seen, and thus can't be interacted with, we don't need to hydrate it just yet, so we delay it for later.

The impact is proportional to the size of your Document Object Model, so it won't be as meaningful if the webpage is short.

In React, the simplest way to implement lazy hydration is to use the react-lazy-hydration package. It's downloaded 15,000 times per week on average, so you can't go wrong. It also gives you the ability to skip hydration entirely for the static parts of your components, which is a non-negligible performance boost as well.
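Based on the props the package documents (whenVisible and ssrOnly), usage looks roughly like this, with Header, Comments, and Footer standing in for your own components:

import React from 'react';
import LazyHydrate from 'react-lazy-hydration';
import { Header, Comments, Footer } from './components'; // hypothetical components

function ArticlePage({ article }) {
  return (
    <div>
      {/* Above the fold: hydrated immediately, as usual */}
      <Header />

      {/* Hydrated only once it scrolls into view */}
      <LazyHydrate whenVisible>
        <Comments articleId={article.id} />
      </LazyHydrate>

      {/* Purely static content: server-rendered, never hydrated */}
      <LazyHydrate ssrOnly>
        <Footer />
      </LazyHydrate>
    </div>
  );
}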

Making Your First Website

Having your own website has never been easier.

Money isn't a problem anymore. You can host any website for free with no recurring monthly fee using providers like Netlify, Github Pages, or Vercel. Domain names can be found for less than $10 per year on Porkbun, GoDaddy, or Google Domains.

Coding isn't a problem anymore either. Tools like Stackbit or NetlifyCMS make it easy to own a custom blog or personal website without hiring a web developer or learning how to program.

You can have a state-of-the-art website running in less than a minute with Stackbit. And yet, very few people have their own. I wonder why, in an economy where attention is so valuable yet hard to get, there aren't many more website owners.

Is it apprehension? A lack of self-confidence? Failing to see the benefits against social networks and managed platforms? A matter of education? A lack of features?

All of the problems I just listed can be solved by either doing your own research or combining different services together. You don't need to have everything figured out beforehand. In fact, I never do: I just learn as I go. Keep improving your website little by little, and your skills will grow along with your self-assurance.

More Voice Typing Experiments

I'm writing this article with voice typing to get used to thinking out loud.

One of my goals before the end of the year is to publish coding-related content on Tiktok and Youtube, but I feel really self-conscious talking out loud. The first step is to get used to my own voice.

My family is probably thinking I'm mad for talking to myself, but the end goal is worth the ridicule.

I want to purchase a license for Dragon speech recognition software and a dictation machine in the future (to write while going on walks), but I don't think the investment will be worth it for now. I must start with the material I already have and develop the habit before indulging in fancier tools.

We live in an age where our voices are our most valuable assets: video content marketing is taking over, dictation allows you to write more than you will ever dream of, and in a few years our whole work environment will be powered by AI voice recognition software. If we don't get used to it now, we probably will be left behind sooner than expected, just like pen and paper have now been largely replaced by screens and keyboards.

Google is surprisingly good at picking up my weird French accent, even though I still have to make a conscious effort to articulate and talk louder. It can't be bad to improve my speaking skills, I tell myself.

In less than 10 minutes, I'm done with my daily 200 words and I can move on to more pressing matters, or write some more. My setup is simple: a lavalier microphone mounted on a camera tripod, directly linked to my laptop with an open Google Docs window. When the words stop flowing and I feel ready to call it a draft, I copy and paste my text to Writelier to edit it and hit publish.

I just need to get used to keeping a Google Docs tab open to take notes whenever inspiration strikes.

It definitely feels like I have a personal assistant, which is great considering the fact that I already have a huge workload.

MySQL: Fetching Many-To-Many Relationships

Fetching entries from Many-To-Many relationships is up to ten times faster and much more readable/easier to code using multiple SELECT statements than JOINs.

I've been wondering for a few months how I could make complex queries more performant and readable, so today I decided to perform a little experiment.

There are basically three ways to fetch a text and its tags using MySQL.

Using a JOIN takes about 36.7s if I repeat the operation 100,000 times. The code looks like this:

const data = await mysql.query(`
  SELECT tv.title as text_title, tv.id, t.id as text_id, c.uuid, c.title 
  FROM TextVersion tv 
  LEFT JOIN Text t ON t.id = tv.text_fk 
  LEFT JOIN text_category tc ON tc.text_id = t.id 
  LEFT JOIN Category c ON tc.category_id = c.id 
  WHERE tv.id=26
`)

const result = {
  id: data[0].id,
  title: data[0].text_title, // use the alias; the unaliased c.title would shadow tv.title
  text_id: data[0].text_id,
  categories: data.map(c => { return {
    title: c.title, 
    uuid: c.uuid
  }})
}

With a concatenated subquery result, I obtain 45s, which is about 23% slower than the previous solution. It's the method I had used so far because I couldn't figure out how to make JOINs work and look good with multiple Many-to-Many relationships:

const data = await mysql.query(`
  SELECT tv.title, tv.id, t.id as text_id, 
    (
    SELECT 
      CONCAT( '[', 
        GROUP_CONCAT(CONCAT(
          '{"title":"', c.title, '"}', ',', '{"uuid":"', c.uuid, '"}' 
         )), 
      ']'
    ) 
    FROM Category c 
    LEFT JOIN text_category tc ON tc.category_id = c.id 
    WHERE tc.text_id = t.id) as c 
  FROM TextVersion tv 
  JOIN Text t ON t.id = tv.text_fk 
  WHERE tv.id=26
`)

const result = {
  id: data[0].id,
  title: data[0].title,
  text_id: data[0].text_id,
  categories: JSON.parse(data[0].c)
}

The last solution I came up with was to use multiple SELECT statements. It took 40.1s (10% slower) to fetch and process the data to the format I needed:

// Note: the MySQL connection must be created with multipleStatements enabled
const data = await mysql.query(`
  SELECT tv.title, tv.id, t.id as text_id 
  FROM TextVersion tv 
  JOIN Text t ON t.id = tv.text_fk 
  WHERE tv.id=26; 

  SELECT c.title, c.uuid  
  FROM Category c 
  LEFT JOIN text_category tc ON tc.category_id = c.id 
  LEFT JOIN TextVersion tv ON tv.text_fk=tc.text_id 
  WHERE tv.id=26
`)

const result = {
  id: data[0][0].id,
  title: data[0][0].title,
  text_id: data[0][0].text_id,
  categories: data[1].map(c => { return {
    title: c.title, 
    uuid: c.uuid
  }})
}

The first solution obviously performs better at scale. The literature almost always advises using JOINs, so no surprise here.

But when I tried decreasing the number of iterations, I found that multiple SELECT statements are equivalent to or outperform JOIN queries.

They are equally fast at 12,000 iterations, 50% faster at a hundred, 10 times faster with 10 iterations, and 60 times faster when performing a single query!

@craigpetterson mentioned on Twitter there is perhaps something wrong with my indexes, but adding composite indexes on the foreign keys doesn't seem to change the results.

JOINs do scale better. But according to my benchmark, multiple SELECT statements are easier to read and code, require less data (no duplication) to be sent over the wire, and outperform JOINs when you send fewer database requests, which is the case for most apps and static-generated webpages.

In conclusion, I'll prefer using multiple SELECT statements rather than complicated JOINs until I manage to perform further tests and see how the queries behave in production.

New Blog Engine

I'm completely re-designing my personal website to develop my audience and my activities. I want to centralize all my online content and products in one place that feels personal.

The hard thing about owning a website is not launching it, it's keeping it updated.

When someone types your name on Google, your personal website shows up first. It has to be the very reflection of who you are, what you stand for, and where you are headed. No one has time to hop around different social networks and tech products to see what you are up to: you are your best curator.

Owing to those last two points, you want a custom website that can easily be refreshed.

Instead of going for a heavy and cumbersome solution such as Wordpress, I went for a static website generator called Gatsby.js. The website is coded with Javascript (React framework), HTML, and CSS. The articles are written in Markdown. The whole app is then hosted on Github and deployed to Netlify for $0. All I need to pay is a domain name: $10 a year for BasileSamel.com.

The main reason why I chose this tech stack is Continuous Integration. I just make a modification to my local Git repository and push it to the World Wide Web in a single command line. It's never been easier (and cheaper) to update a home-made website.

I don't need to buy an additional web server because I can always redirect my users to another relevant service if I need them to interact with me. For example, I redirect them to Telegram to communicate, to Mailchimp to subscribe to my newsletter, or to Buy Me A Coffee to tip me.

The simplest solution is usually the one you are going to keep around.

Nginx For JAMStack Apps

Pre-rendering your web pages following the JAMStack methodology is a sure way to decrease your time-to-first-byte metric, but dropping your NodeJS server when you can is even better.

In any app, you need a web server to send content back, but not all web servers are equal: a web server powered by Javascript like Express will be 6.5 times slower than one written in C, according to a recent study. 27 times slower if you go with PHP, 60 with Ruby, and 70 with Python.

Even among web servers written in C, the performance difference can be huge: Apache is 2.5 times slower than Nginx!

It's in your best interest to use the most performant technology, not only to decrease your costs (less RAM consumption) but also to bring the best experience to your users.

Great things happen with Nginx when you set it up as a reverse proxy to handle all incoming requests. You can, for example, tell Nginx in 6 lines of configuration to send static webpages if they exist and fall back to a NodeJS server when it needs to, as sketched below. This way, you can have a pre-rendered website in one folder and a full-blown backend application running in the background that you can call from within your web components: it's the best of both worlds, without the monolithic architecture.
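A hedged sketch of that kind of configuration, to be placed inside an existing server block (paths and port are assumptions): serve a pre-rendered file if it exists, otherwise hand the request to the Node app.

location / {
    root /var/www/static;
    try_files $uri $uri/index.html @node;
}

location @node {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
}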

Using Nginx as a reverse proxy is also how you can implement features like Gatsby's incremental builds, which are only available on paid plans.

I'm currently running experiments with a $3/month cloud hosting provider called Hetzner that lets me set up my own virtual web server, and so far the aforementioned configuration works really well for a pre-rendered SaaS app. I don't need much RAM since most of the content is served by Nginx and I don't have any expensive NodeJS functions to run, but I can still rebuild and scale my website as much as I want without fearing huge bills. It feels truly magical.

On Auth

Authentication is how the app identifies a user, while authorization tells the app what the user can and cannot do. Both are essential for any web application storing data produced by a third party. Authentication and authorization (auth for short) aren't simple things, but they are easy to implement (and partly automate) once you get the hang of how they work.

Auth can be divided into 5 main components:

  • A web server to handle all the logic.
  • A database to store hashed credentials.
  • A session cookie stored in a web browser to tell the web server we are still using the app.
  • Short-lived JSON Web Tokens (JWT) to perform API calls from within the user interface.
  • A private API key that can be exchanged against web tokens to make API calls between different domain names.

When a user needs to be authenticated to perform an action, you usually send them to a login form. The web server compares the identity and the password entered in the form to the ones stored in the database. If there is a match, the server creates a session cookie in the visitor's browser that says, in effect, "Keep me logged in, I'm currently using the website!"

When users are done with their business, they can click on a Logout link that eats the cookie and burns the bridge that used to stand between them and the machine.

Of course, none of this would work if users didn't previously sign up by choosing at least an email address and a password to identify themselves. The password is hashed by the web server to prevent bad people from doing bad things on a user's behalf.

The great thing about session cookies is that they can only be read by the web browser that requested them and the server that created them. But that becomes an issue when you need to communicate with other web servers through an API.

In this situation, we use a private API key to identify an otherwise unknown request. The API key is unique and can be exchanged for a JSON Web Token to perform the tasks offered by a web service.

We also use session cookies as a currency to be exchanged for JWTs, which standardizes the application programming interface and makes it simpler for everyone involved. That's how you can make secure API calls from within a user interface belonging to the same domain name.

Authorization data is stored in the database along with the corresponding user. The software simply has to retrieve the user information from the session cookie or the JWT and check that the request has the right permission level to perform an action.

That's basically how I do it on the new Cowriters website. I don't use a fancy framework like passport.js, because I find it much harder to configure and way too heavy to load for my simple use case. Instead, I use a combination of smaller libraries, taking inspiration from a couple of tutorials: one about JWTs and another about session cookies. I adapted both to work with my ExpressJS + MySQL setup, and that was it! It took about 8 hours to learn and implement everything, and I can now move it to its own encapsulated web service in my web server to improve it over time or reuse it across multiple applications.
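
To make the flow more concrete, here is a minimal sketch of the session + JWT dance described above, written with Express, express-session, bcryptjs, and jsonwebtoken. The library choices, the secrets, and the findUser() helper are illustrative assumptions, not the actual Cowriters code.

const express = require('express');
const session = require('express-session');
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

// Hypothetical lookup; in a real app this would query the users table.
async function findUser(email) {
  return { id: 1, email, passwordHash: await bcrypt.hash('demo-password', 10) };
}

// 1. Login: compare credentials, then attach the user to the session cookie.
app.post('/login', async (req, res) => {
  const user = await findUser(req.body.email);
  if (!user || !(await bcrypt.compare(req.body.password, user.passwordHash))) {
    return res.status(401).json({ ok: false });
  }
  req.session.userId = user.id;
  res.json({ ok: true });
});

// 2. Exchange the session cookie for a short-lived JWT used by the UI for API calls.
app.get('/token', (req, res) => {
  if (!req.session.userId) return res.status(401).json({ ok: false });
  const token = jwt.sign({ sub: req.session.userId, role: 'user' }, 'jwt-secret', { expiresIn: '15m' });
  res.json({ ok: true, token });
});

// 3. Authorization: check the permission level carried by the token.
function requireRole(role) {
  return (req, res, next) => {
    try {
      const payload = jwt.verify((req.headers.authorization || '').split(' ')[1], 'jwt-secret');
      if (payload.role !== role) return res.status(403).json({ ok: false });
      next();
    } catch (e) {
      res.status(401).json({ ok: false });
    }
  };
}

app.delete('/posts/:id', requireRole('admin'), (req, res) => res.json({ ok: true }));

// 4. Logout: destroy the session, i.e. eat the cookie.
app.post('/logout', (req, res) => req.session.destroy(() => res.json({ ok: true })));

app.listen(3000);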

On Transcription Software

Interviews are a great way to create original content, but they aren't easy to facilitate and distribute. You need great questions, a good flow, and a clear direction. If the interview is in an audio or video format, you also need timestamps, notes, and transcriptions.

Transcriptions are especially important. As a host, even if you publish the interview as-is, you'll need transcripts to improve your SEO. If you need to go through the whole interview to produce an article, it's way faster to transcribe the video directly and work from there.

The average human types somewhere between 40 and 75 words per minute. Speaking is about twice as fast, from 110 to 200 words per minute, but nowhere near as quick as reading: between 200 and 450 wpm! This is why I always prefer reading to listening when I have the choice, especially if I need to take notes or work with audio material.

The problem is that transcription services aren't cheap. According to Google, a professional transcriptionist charges $90-180 per audio hour. Automated transcription services are cheaper, but they still cost at least $12 per audio hour.

I have a freelance assignment at the moment where I need to turn a 50-minute video interview into an article. Transcribing it and taking notes manually would take me about two hours. I'm paid $25 per hour, so if I paid for a transcription service, I would lose 50% of my paycheck. I'm also paid by the number of words I publish, so it's in my best interest to write faster (the more words per hour, the bigger my hourly rate), but 50% is not an acceptable loss ratio, in my opinion.

I have two solutions. I can either build my own transcription engine, or I can use a low-level transcription API.

Building my own engine would mean using something like Mozilla's DeepSpeech (which is built on TensorFlow) and feeding it data. Having studied the basics of machine learning in college, I know that training your own models is not a trivial task. I might try it later, but for now I need something I can use quickly. Hence my decision to go for a low-level transcription API.

After some brief research, I settled on Google Cloud Speech-to-Text. You can try a demo on the landing page, so I know it's accurate enough for my use case. According to Google, the error rate is around 5%, which is the standard for most speech-to-text models out there. The best part is the pricing: $0.024 per minute, or $1.44 per hour. You only pay for what you use, and the first hour each month is free.

In other words, I can build my own little transcription pipeline at a tenth of the cost most service providers ask for, and I can opt out anytime: my 50-minute interview comes out to roughly 50 × $0.024 ≈ $1.20. Once the algorithm has done the bulk of the work, I can use a free tool like oTranscribe to correct the last few mistakes.
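
For the curious, the call itself is short. Here is a minimal sketch with the official NodeJS client, assuming the audio has already been uploaded to a Cloud Storage bucket; the bucket, file name, and encoding are placeholders:

const speech = require('@google-cloud/speech');

async function transcribe() {
  const client = new speech.SpeechClient();

  // Long audio (anything over a minute) goes through the asynchronous endpoint.
  const [operation] = await client.longRunningRecognize({
    config: { encoding: 'FLAC', sampleRateHertz: 16000, languageCode: 'en-US' },
    audio: { uri: 'gs://my-bucket/interview.flac' },
  });

  const [response] = await operation.promise();
  return response.results.map(r => r.alternatives[0].transcript).join('\n');
}

transcribe().then(text => console.log(text));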

Let's see how it goes and I'll write a tutorial about it later this week.

Organizing SCRUD code in Symfony 5

Writing SCRUD (search, create, read, update, delete) code is a very common thing to do when working on your back-end. The SCRUD logic spans the entire MVC (Model, View, Controller) architecture, all the way from API endpoints to database systems.

This post will describe how I organize this code in my Symfony applications to make it clear and performant. The use case for this example is simple: I want to manage blog posts in a database using an API.

It always starts with the database layer. Symfony leverages a tool called Doctrine to abstract away interactions with the database. You just create an object called an Entity, and Doctrine will take care of giving you an API to manage the rows of the relevant tables.

src/Entity/BlogPost.php

<?php
namespace App\Entity;

use Doctrine\ORM\Mapping as ORM;
use Ramsey\Uuid\Uuid; // Uuid::uuid4() in the constructor assumes the ramsey/uuid package

/**
 * @ORM\Entity
 * @ORM\Table(name="BlogPost")
 */
class BlogPost {
/**     
* @ORM\Column(type="integer")     
* @ORM\Id     
* @ORM\GeneratedValue(strategy="AUTO")     
*/    
protected $id;    

/**     
* @ORM\Column(type="string", length=50, unique=true, nullable=true)     
*/    
protected $uuid;    

/**     
* @ORM\Column(type="datetime", nullable=true)     
*/    
protected $published_datetime;    

/**     
* @ORM\Column(type="string", length=255, nullable=true)     
*/    
protected $title;    

/**     
* @ORM\Column(type="text", nullable=true)     
*/    
protected $content;    

public function __construct(){        
    $this->uuid = Uuid::uuid4()->toString();    
}    

// GETTERS    
public function getId(){        
    return $this->id;    
}    

public function getUUId(){        
    return $this->uuid;    
}    

public function getPublishedDatetime(){        
    return $this->published_datetime;    
}    

public function getTitle(){        
    return $this->title;    
}    

public function getContent(){        
    return $this->content;    
}    

// SETTERS    
public function setPublishedDatetime($published_datetime){     
    $this->published_datetime = $published_datetime;        
    return $this;    
}    

public function setTitle($title){        
    $this->title = $title;       
    return $this;    
}    

public function setContent($content){        
    $this->content = $content;        
    return $this;    
}    

// FUNCTIONS    
public function toArray(){
return [            
    'id' => $this->id,            
    'uuid' => $this->uuid,            
    'published_datetime' => $this->published_datetime ? $this->published_datetime->format('Y-m-d H:i:s') : null, // the column is nullable
    'title' => $this->title,            
    'content' => $this->content        
];    
}    

public function toJSON(){        
    return json_encode($this->toArray());    
}    

public function __toString(){        
    return json_encode($this->toArray());    
} 

} 
?>

The Entity classes contain table fields in the form of protected properties, as well as getters and setters. I also usually add some functions, like toArray(), which are often used to send data to a Javascript client. Doctrine can consume Entity classes to generate SQL code that will be used to manage our database.

We need another layer to perform the SCRUD functions themselves, also known as the Service layer.

In Symfony 5, we can split the service logic into two classes: a Repository to retrieve data from the database (search and read functions), and a Service class to create and update our entities while performing type checks.

/src/Repository/BlogPostRepository.php

<?php
namespace App\Repository;

use App\Entity\BlogPost;
use Doctrine\Bundle\DoctrineBundle\Repository\ServiceEntityRepository;
use Doctrine\Persistence\ManagerRegistry;

class BlogPostRepository extends ServiceEntityRepository {

public function __construct(ManagerRegistry $registry)    {        
    parent::__construct($registry, BlogPost::class);    
}    

public function search($args){
    // Build the WHERE clause only when a text query was provided,
    // so the SQL stays valid when there is no filter.
    $filters = '1 = 1';
    if(isset($args['query'])){
        $filters .= " AND p.title LIKE :query";
    }
    $limit = !empty($args['limit']) && is_integer($args['limit']) ? $args['limit'] : 20;
    $offset = !empty($args['offset']) && is_integer($args['offset']) ? $args['offset'] * $limit : 0;

    $conn = $this->getEntityManager()->getConnection();
    $stmt = $conn->prepare("
        SELECT *
        FROM BlogPost p
        WHERE ". $filters ."
        ORDER BY p.published_datetime DESC
        LIMIT {$limit} OFFSET {$offset}
    ");

    if(isset($args['query'])){
        // The wildcards belong in the bound value, not around the placeholder.
        $stmt->bindValue(':query', '%' . $args['query'] . '%');
    }

    $stmt->execute();
    return $stmt->fetchAll();
}
}

The Repository class allows us to query data using native functions (findOne, findOneByTitle, findAll, etc.) or custom ones (search). I often write custom functions in raw SQL to improve the performance of my queries: for complex requests involving several joins, skipping the ORM abstraction can double the speed, which matters when you're trying to decrease the loading time of intricate webpages.

/src/Service/BlogPostService.php

<?php 
namespace App\Service; 

use App\Entity\BlogPost; 

class BlogPostService{    

public function create($args){        
    return $this->setFields(new BlogPost(), $args);    
}    

public function update(BlogPost $entity, $args){
    return $this->setFields($entity, $args);    
}    

private function setFields($entity, $args){        
    if(isset($args['title'])){            
        $entity->setTitle(htmlspecialchars($args['title']));        
    }        
    if(isset($args['content'])){     
        $entity->setContent($args['content']);        
    }        
    if(!empty($args['published_datetime'])){            
        $entity->setPublishedDatetime($args['published_datetime']);        
    }        
    return $entity;    
} 
}

The Service class calls the relevant setters to change a database row's state. Each function returns the updated entity to be persisted in the database by the entity manager at the Controller layer.

/src/Controller/API/BlogPostController.php

<?php 
namespace App\Controller\API; 

use Symfony\Component\Routing\Annotation\Route; 
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController; 
use Symfony\Component\HttpFoundation\Request; 
use Symfony\Component\HttpFoundation\JsonResponse; 
use Sensio\Bundle\FrameworkExtraBundle\Configuration\IsGranted; 

use App\Entity\BlogPost; 
use App\Service\BlogPostService; 
use App\Repository\BlogPostRepository; 

/** 
* @Route("/api/blog-posts") 
*/ 
class BlogPostController extends AbstractController {   
 
/**    
* @Route("/", name="api_blog_posts_search", methods={"GET"})    
* @IsGranted("ROLE_USER")    
*/    
public function api_blog_posts_search(        
    Request $request,        
    BlogPostRepository $repository
){        
    try{            
        return new JsonResponse(['ok' => true, 'posts' => $repository->search(['query' => $request->query->get('query')])], 200);        
    } catch(\Exception $e){            
        return new JsonResponse(['ok' => false, 'error' => $e->getMessage()], 500);        
    }    
}    

/**    
* @Route("/{uuid}", name="api_blog_posts_read_one", methods={"GET"})    
* @IsGranted("ROLE_USER")    
*/    
public function api_blog_posts_read_one(        
    BlogPost $post,        
    Request $request    
){        
    try{            
        return new JsonResponse(['ok' => true, 'post' => $post->toArray()], 200);        
    } catch(\Exception $e){            
        return new JsonResponse(['ok' => false, 'error' => $e->getMessage()], 500);        
    }    
}    

/**    
* @Route("/", name="api_blog_post_create", methods={"POST"})    
* @IsGranted("ROLE_USER")    
*/    
public function api_blog_post_create(        
    Request $request,        
    BlogPostService $service    
){        
    try{            
        $post = $service->create([                
            'title' => $request->request->get('title'),                
            'content' => $request->request->get('content')            
        ]);            
        
        $em = $this->getDoctrine()->getManager();         
        $em->persist($post);            
        $em->flush();            
        
        return new JsonResponse(['ok' => true, 'post' => $post->toArray()], 200);
    } catch(\Exception $e){            
        return new JsonResponse(['ok' => false, 'error' => $e->getMessage()], 500);        
    }    
}    

/**    
* @Route("/{uuid}", name="api_blog_post_update", methods={"PUT"})    
* @IsGranted("ROLE_USER")    
*/    
public function api_blog_post_update(        
    BlogPost $post,        
    Request $request,        
    BlogPostService $service    
){        
    try{                
        $params = json_decode($request->getContent(), true)['text'];                
        $post = $service->update($post, [                    
            'title' => $params['title'],                    
            'content' => $params['content']                
        ]);                

        $em = $this->getDoctrine()->getManager();                
        $em->persist($post);                
        $em->flush();            

        return new JsonResponse(['ok' => true], 200);        
    } catch(\Exception $e){            
        return new JsonResponse(['ok' => false, 'error' => $e->getMessage()], 500);        
    }    
}    

/**    
* @Route("/{uuid}", name="api_blog_post_delete", methods={"DELETE"})    
* @IsGranted("ROLE_USER")    
*/    
public function api_blog_post_delete(BlogPost $post){        
    try{            
        $em = $this->getDoctrine()->getManager();            
        $em->remove($post);            
        $em->flush();            
        return new JsonResponse(['ok' => true], 200);        
    } catch(\Exception $e){            
        return new JsonResponse(['ok' => false, 'error' => $e->getMessage()], 500);        
    }    
} 
}

The Doctrine entity manager is only invoked at the Controller layer, so nothing touches the database prematurely. If several entities need to be updated during the same request, flush() is still called only once thanks to this design. This is also why there is no delete() function in the Service layer: deleting only requires a call to the entity manager.

And voilà, that's how you obtain a functional RESTful API with clear, concise code in Symfony 5.

Preact

It took longer than expected, but I managed to overcome all the technical roadblocks I ran into while using Preact as a frontend framework.

Preact is like React, but it weighs much less. That's an extremely important factor to take into account when you build websites, since a big bundle size hurts your page speed and thus your Google Lighthouse score.

VueJS is a good alternative to Preact, but I didn't want to learn a different API.

Preact also has a compatibility layer to adapt to vanilla React dependencies. DraftJS was the only incompatible library, so I switched to SlateJS. I'm more than happy with the latter, even though I only scratched the surface. It takes less time to develop with, it's easily customizable, and the code is much clearer than DraftJS'.

I also managed to implement server-side rendering and route-based code-splitting, which are far from well-documented. Now that I have a single central bundle and a bunch of chunks, I just have to refactor Writelier's webpages to be pre-rendered and implement service workers.
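
For reference, the rendering step itself is tiny once everything is configured. Here is a bare-bones sketch using preact-render-to-string, with a placeholder component and output path rather than Writelier's actual pages:

const { h } = require('preact');
const { render } = require('preact-render-to-string');
const fs = require('fs');

// A placeholder page component; in practice this comes from the compiled bundle.
function Page({ title }) {
  return h('main', null, h('h1', null, title));
}

// Render the component to a plain HTML string at build time.
const html = render(h(Page, { title: 'Hello from Preact SSR' }));
fs.writeFileSync('public/index.html', `<!DOCTYPE html><html><body>${html}</body></html>`);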

Replacing React with Preact allowed me to take my heaviest webpage from a 92/100 Performance score to 96/100. Adding pre-rendering should make it perfect. Article pages are already scoring 100/100, so it will be optimal for SEO. I just have to measure how long it will take to build 30,000 webpages, but the first results I obtained with React (< 2 minutes) are promising.

Redesigning my Personal Website

My personal website is not well-kept, and its potential is untapped. My motives have changed a lot in one year, and my online domain should reflect that.

My current goal is to reach ramen profitability. At $1,000 per month, I need 500 people to buy a $2 product from me every month. I have three income sources at the moment: my 200 Words a Day patrons (90% of my total revenue), my ebook (10%), and my new Patreon account (0%, not launched yet). The objective is to design the flow of the website to convince people to help me help them.

You need three things to persuade someone online: added value, utmost transparency, and good copywriting.

I add value by investing my time in the development of important web products and by sharing what I learn in my daily writings. I also use social proof to make my personality stand out.

I practice transparency by openly reporting my goals, my values, and my metrics (cost/revenue breakdowns, daily active users...). I stay open and accessible.

Finally, I need stellar copywriting. Most portfolio websites feel generic and bland, but my focus is on inspiring the reader. To do so, I follow a Golden Circle structure emphasizing the pain points I'm trying to address through my work. I'm pretty enthusiastic about the first-principles approach to problem-solving.

This new website should be up tomorrow.

Svelte

I discovered Svelte, yet another Javascript framework, a month or so ago. I've been using Preact for a few months now, but I decided to jump ship after trying Svelte yesterday: it simply does better at what I want it to do.

First, Svelte doesn't use a virtual DOM, so its bundle size and time-to-interactive are smaller: bundles are 25% smaller than Preact's in the official benchmark, and I'm already seeing better results with my toy project. It's easy to split code almost out of the box using Webpack and import statements, and everything is done at build time.

Regarding static site generation, Svelte doesn't rely on JSX, so rendering components to HTML strings is way faster and doesn't require much configuration, only a tiny webpack script. For this reason, it scales much better than Preact when you have to generate thousands of pages. My webpack build also runs blazingly fast because it doesn't rely on things like Babel presets, and that feels great when I need to tweak code and get instant feedback.
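
As an illustration, here is roughly what rendering a component to an HTML string looks like, using Svelte 3's svelte/register hook for brevity instead of a full webpack build; the component and paths are placeholders:

require('svelte/register'); // compiles .svelte files on the fly for server-side rendering
const fs = require('fs');

const Page = require('./src/routes/Page.svelte').default;

// Components compiled for SSR expose a render() method returning plain strings.
const { html, head, css } = Page.render({ title: 'Hello from Svelte SSG' });

fs.writeFileSync('public/index.html', `<!DOCTYPE html>
<html>
  <head>${head}<style>${css.code}</style></head>
  <body>${html}</body>
</html>`);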

Last but not least, I love the syntax. It's just pure HTML and Javascript with no fluff, minimalist, and easy to read and write. It simply feels more native to the Web because it leverages existing technologies instead of adding abstraction layers. Being able to use lightweight vanilla Javascript libraries without writing a React adapter is refreshing.

Svelte also has its own NextJS-like framework called Sapper. I don't plan to use it because I need to have total control over how my static webpages are built, but I definitely see it becoming a faster alternative to Next and Gatsby.

Svelte for SSG

Today I finished moving the static site generation logic from Preact to Svelte, and I'm positive it was a great move.

Everything is substantially faster: the Rollup build takes 2 seconds instead of 10 with Webpack (an 80% reduction), and generating 25k blog posts takes only 22 seconds against 120 previously (an 82% reduction). At Netlify, a build minute is priced between $0.014 and $0.019 ($19 for 1,000 minutes, then $7 for an additional 500 minutes), so switching from Preact to Svelte would save me roughly 80% on my monthly bill, or the full $19 per month if it lets me stay on the free plan. Not bad for a solo indie business, if I were using Netlify, that is.

Even though the documentation isn't optimal yet, I managed to implement all of the important features I needed to generate efficient websites: route-based code-splitting, server-side rendered (aka SSR) routing, hybrid rendering (SSR + client-side hydration), service worker creation, and TailwindCSS support. There is definitely an opportunity to write technical content about this futuristic yet minimalist Javascript framework.

I also managed to go below the 2-second mark for the time-to-interactive metric: 1.2 s exactly. That's twice as fast as my previous Preact code, although both versions give me a perfect 100/100 Lighthouse score. All of the other Performance metrics are below 1 second, and the max first input delay is below 200 ms. Perfect user experience, and perfect for search engine optimization.

It will take me a few more weeks to discover all the niceties of Svelte, but I'm already pleased with it and will suggest it to every web developer I get the chance to talk with.

Switching to NodeJS

I finally pulled the plug on PHP to focus on Javascript.

It wasn't an easy choice to make, but I found many good reasons to spend some extra time migrating Cowriters from Symfony to NodeJS.

Performance has been my main focus over the last few months. I want Cowriters to have the best CMS/rendering engine there is, not only for SEO and the increased traffic it can generate, but also to provide a great user experience.

Node performs significantly better than PHP when it comes to web applications. Its non-blocking I/O model is great for traffic-intensive websites, and allows for interesting features like real-time messaging (hello Cowriters' very own chat app) or concurrent request management (hello fast collaborative features). The web is asynchronous by definition, so it will be a great fit for what I envision at Cowriters.

I'm progressively moving to a JAMStack architecture, with heavy use of browser APIs. Using Symfony in association with a technology like React leads to code duplication when I try to pre-render webpages, and switching from one language to another is cumbersome, so I'll increase my development speed tremendously by writing in Javascript the whole time. Since Javascript isn't going anywhere any time soon, I might as well embrace it for everything.

On a personal note, I'm also taking into account my future ability to work on exciting projects, and PHP doesn't seem to be used by the companies I'm interested in. It's still one of the most mainstream languages out there, and WordPress alone powers about a third of all websites, but I just don't think I'll ever want to work on WordPress plugins. All the cool startups use Next, Gatsby, and other Javascript frameworks, and more and more machine learning tools are getting NodeJS support, which will probably never happen with PHP.

Last but not least, I want to be able to laugh at jokes about PHP developers. I don't quite get them at the present moment :)

Tech Stack to Write a Book

I wrote Alter-Nomad using Markdown and Pandoc. That's the tech stack I found after researching how to write a book using Markdown.

The benefits are nice: Markdown allows me to focus on the content rather than the way it looks, it's easy to manage the different drafts using Git versioning, and Pandoc makes it easy to obtain a finished product in a single command line. It was nice and easy to use, and it did the job.

The problem is that Pandoc is quite hard to customize: it relies on a LaTeX engine to generate references, and on Cascading Style Sheets to style the resulting files.

I had a much better idea two days ago.

If you think about it, an ebook is basically a Progressive Web App: you want to be able to read it both offline and online, and it has to be generated from code to ease the writing process. The idea is to use a static file generator like GatsbyJS to write, version, publish, and compile a rich ebook.

Each section of the book is written in Markdown. References are injected at build time using Markdown variables and a custom helper function. Styling is handled with Sass-compiled CSS. GatsbyJS uses NodeJS to output HTML, and that HTML, coupled with our custom CSS, can then be turned into rich PDF, mobi, and epub files.

It's quite the developer's dream I think, so I'm going to release my custom config as an open-source GatsbyJS starter for everyone to use once I'm done implementing it.

Testing a New Hosting Provider

My current hosting provider hasn't been optimal since I moved away from PHP to code with NodeJS, so I'm testing a new one suggested to me by @phaidenbauer: Hetzner.

I love the fast interface, the affordable prices, and the company values. I chose a Cloud server based in Finland that only costs me $3 per month with plenty of bandwidth and storage space, and according to my measurements it should be enough to cover Writelier's current needs.

I'm not familiar with Cloud hosting, but it's not much different from shared or VPS hosting from what I've seen so far. In my case, I have root access to a Debian image where I can install whatever I want and customize my stack. It's pretty life-changing: my current host isn't suited to NodeJS, so this should drastically improve the performance of my apps and websites.

The best part is my server will still be running on 100% renewable energy (wind and hydropower), which is something I don't want to do without.

Thanks to root access, I'll be able to optimize my web server to serve static content faster. I'm planning to run a cluster of NodeJS instances behind Nginx acting as a reverse proxy to load-balance the traffic, which should also improve availability.
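
The NodeJS side of that plan only takes a few lines with the built-in cluster module. Here is a rough sketch, with a placeholder port and request handler, and Nginx sitting in front as the reverse proxy:

const cluster = require('cluster');
const os = require('os');
const http = require('http');

if (cluster.isMaster) {
  // One worker per CPU core; Nginx load-balances incoming traffic in front of them.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork()); // respawn a worker if one crashes
} else {
  http.createServer((req, res) => res.end(`hello from worker ${process.pid}`))
      .listen(3000);
}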

I don't plan to make the switch soon though, since I'm still recovering from the migration I performed on September 10. I'll just transfer a domain name, do some experiments and benchmark stuff.

Why A MERN Stack For Your Business

There are three main reasons why I'm settling on MERN (MongoDB, Express, React, NodeJS) to build my web apps: it's performant, it gives me the freedom and flexibility to do whatever I want, and it allows me to use cutting-edge technology to create better user experiences.

First, Javascript is way more performant than traditional web alternatives in terms of energy consumption and execution speed: 6 times faster than PHP, and 12 times faster than Ruby or Python.

Go and Java are twice as fast with only a third of the energy consumption, but they are absolute hell for building isomorphic applications (server-side rendered HTML hydrated with Javascript on the client), so you end up duplicating your code across different languages and almost doubling your development time. Javascript is still mandatory in 2020, so doing everything in Javascript and using a front-end framework like React, Vue, or Angular as a backend templating engine is easier to learn, set up, and maintain. You can use the same libraries on both sides, and everything is far easier to put in a container. Even if you use an adapter to write web components from your Django/Laravel/Rails backend, you'll still need NodeJS to bundle the Javascript code.
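
To illustrate what "React as a backend templating engine" means, here is a minimal sketch using react-dom/server; the component is a made-up example:

const React = require('react');
const { renderToString } = require('react-dom/server');

// The same component can be rendered on the server...
function Hello({ name }) {
  return React.createElement('h1', null, `Hello ${name}`);
}

const markup = renderToString(React.createElement(Hello, { name: 'MERN' }));
console.log(markup); // an HTML string like "<h1>Hello MERN</h1>", ready to embed in a template

// ...and hydrated in the browser bundle with ReactDOM.hydrate(<Hello name="MERN" />, rootNode).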

MongoDB, thanks to its flexibility, is perfect for an app whose data schema is bound to change fast. Data integrity is great, but it's always preferable for a newborn business to be able to move fast, even if it implies breaking things.
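
Here is a quick sketch of what that flexibility looks like with the official NodeJS driver; the connection string and collection are placeholders. Two differently-shaped documents can coexist in the same collection while the product evolves:

const { MongoClient } = require('mongodb');

(async () => {
  const client = await MongoClient.connect('mongodb://localhost:27017', { useUnifiedTopology: true });
  const posts = client.db('demo').collection('posts');

  // An early version of the data model...
  await posts.insertOne({ title: 'First post', content: 'Hello world' });
  // ...and a later one, with new fields, living in the same collection.
  await posts.insertOne({ title: 'Second post', content: 'Hello again', tags: ['mern'], coverImage: 'hero.png' });

  await client.close();
})();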

Last but not least, you can do everything with MERN: SaaS, JAMstack prerendered websites, and even machine learning. All the best tools have NodeJS software development kits, so you can confidently write software without fearing it will go unmaintained in a few years. NodeJS's package manager, npm, is by far the most active library repository across all programming languages: 923 new libraries are created on npm every day on average, which is 3 times more than PyPI (Python), Packagist (PHP), and RubyGems (Ruby) combined.

If you're serious about becoming a fullstack developer, I would recommend MERN without hesitation.

Writing a Book with GatsbyJS

Typewriters are almost gone. Software has become the norm, but when you look at the list of the most popular book-writing software, you find What You See Is What You Get (WYSIWYG) editors like Google Docs, Microsoft Word, or LibreOffice. WYSIWYM remains a techie's tool.

LaTeX has dominated academia since its first release in 1985, which even predates Tim Berners-Lee's invention of the World Wide Web in 1989. But LaTeX wasn't designed for web consumption. In 2004, Markdown established itself as a more minimalistic approach to publishing HTML documents using a markup language.

Markdown is now the de-facto markup language used by developers around the world, and yet, the tools available to write a book using Markdown remain rudimentary.

I used Pandoc to write my first ebook. It wasn't a great developer experience. It makes converting Markdown to PDF, EPUB, or MOBI easier, but customizing the output is hard, and writing a bibliography is a cumbersome mix of LaTeX syntax and Markdown variables. This is definitely not the tool I would have imagined using to publish a book in 2020.

A few months later, I started using GatsbyJS, a JAMStack static file generator based on React and Markdown, to redesign my personal website. A few weeks after that, I decided to work on my next ebook. That's when it hit me: an ebook is simply a progressive web app.

When you write an ebook, programming tools and principles like versioning and cascading style sheets can be applied to increase your productivity. An EPUB file is an XHTML archive, and the same goes for the Mobipocket format used by Amazon. Generating HTML is a mandatory step of publishing an ebook, so it's not much different from coding a static website.

Modern JAMstack static website generators are the best tools we have to write books in extended Markdown and publish them in different formats. The best part is that it's fast and free: you can build a rich HTML version of your book, convert it to PDF, EPUB, and MOBI, and publish it on Netlify for everyone to read, all in less than a minute. How powerful is that?
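
As a taste of the final step, assuming Pandoc is installed and the build produced a single HTML file for the book (the paths and title here are hypothetical), the conversion can be scripted from NodeJS in a couple of lines:

const { execSync } = require('child_process');

// Turn the built HTML into an EPUB, reusing the book's stylesheet.
execSync('pandoc public/book/index.html --css src/styles/book.css --metadata title="My Book" -o dist/book.epub');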

Now, not everyone is a developer, and not everyone is familiar with the JAMstack, so I decided to write a dead-simple tutorial to allow you to do what I just mentioned using GatsbyJS. Ready? Let's go!

Writing a Book with JAMStack

Typewriters are almost gone. Software has become the norm, but when you look at the list of the most popular book-writing software, you find expensive What You See Is What You Get (WYSIWYG) editors like Scrivener or writeai, or multi-purpose and cumbersome ones such as Google Docs, Microsoft Word, or LibreOffice. WYSIWYM remains a techie's tool.

Now, not everyone is a developer, and not everyone is familiar with the JAMstack, so I decided to build my own visual website generator so that anyone can write a book in Markdown and publish it on the Web with a dead-simple content management system. The beta is coming out soon, don't forget to subscribe to Bouquin!