DTTH Blog

Reader

Read the latest posts from DTTH Blog.

from Zumi's Blog

Welcome back to my travel log ;)

Day 3: Hamburg and Train to Copenhagen

Having stayed up until 3am writing the last part, I had a good late sleep, taking off at around 10. As it was not the best weather, we decided to just walk around until we got to a food place this time. During this walk, I did the usual: took some photos of the everyday streets and places. As always, I don't try to find the best angles or scenery, just what a common pedestrian may see on a morning walk to work. That said, it was pretty enjoyable.

We ended up getting some food at a local-looking Gasthaus. My friend and I were looking forward to trying Hamburg's traditional Labskaus, but we misread the menu and thought they had a meat version of it (lol). I took it anyway, while my friend got something else. In the end, we were pleasantly surprised that there was (a lot of) fish on the plate. Good dish, I must say. I liked the pickled beetroots too, despite the vegetable not being my favorite.

In the afternoon, the weather wasn't so great, so we decided to just stay in a café. With some drinks ordered, we passed the time until our next train onward to Copenhagen at around 5:40pm. I didn't pass on one last chance to get another Bratwurst; unfortunately, I was too eager to munch the whole thing and didn't take any photos ;)

Train to Copenhagen

The “standard” way to go from Hamburg to Copenhagen is by an ICE directly connecting the two cities. But, being the broke student I am, we opted for a triplet of trains instead: a Deutsche Bahn train from Hamburg dropping us off at the border town of Flensburg, followed by two DSB (Danish national railway) trains towards Fredericia and Copenhagen respectively.

Nothing spectacular about this trip: the DB train was about 10 minutes late, but as all connections were ~25 minutes apart (a considerate decision from DSB, I assume!), there was no trouble. We arrived in Copenhagen at exactly the expected arrival time, a bit past midnight.

My first impression of Denmark, which already started in Fredericia...

Everything, including the toilets, is spotlessly clean! And...

7-Elevens?!

Copenhagen, first impressions

The main station gave me a Montreal station vibe, with a twist of red-and-white (and a lot of 7-Elevens)!

Our ho(s)tel is about three metro stations from Copenhagen main station, and so we found ourselves there in a few minutes. One note about the station: you can walk from the tracks to the underground metro passage, but the station signs insist on you getting out of the main station and heading to the Metro tracks after crossing a road. Strange decision to be sure, but it could be a time-saving measure, as I expect it takes a bit less time than crossing through the tunnels.

The metro was clean and spacious.

... and so are the streets. Look at that! And there are a lot of 7-Elevens.

We checked in quite late, but the reception was expecting us – and check-in could be done automatically, so there was no hassle. We got a small and cozy room; time to sleep!

Day 4: Copenhagen @ Christmas Eve

We woke up around the not-so-early 9am. It had crossed our minds that stores would close early on this day, so we were prepared. There is a Netto right next to our place, so we went there looking for some food and drinks.

Potatoes-in-a-jar

We walked down the main streets of Copenhagen, just acknowledging that museums were expected to be closed for both the 24th and 25th. I wanted a coffee (as a caffeine-holic), so we walked towards one of the nearest cafés according to Google Maps. I was treated to a cozy atmosphere and lots of space on every street. This is a pedestrian haven, that's for sure.

Looking around, we found a lot of Japanese tourists coming to Copenhagen. We decided to also follow the Japanese tradition, and spent our first Danish lunch... in KFC ;)

Afterwards, more walking around. We visited the Kastellet and the surrounding streets, peeking at all the 7-Elevens we passed along the route. Feeling slightly hungry, we took an S-Bahn back to the main station, and turned our way back to the shopping street...

Extra photo dump

That Dinner Story

As you could've guessed from your own experience, most places close early on Christmas Eve. Including restaurants. This leaves a problem for tourists: what do we have for dinner? The answer: restaurants run by foreigners themselves. Along the shopping street, we found the following options:

  • Mr. Pho, a Vietnamese-style restaurant serving popular Vietnamese dishes, as the name suggests. Reviews do say it is run by Chinese residents though, and so our pride did not let us go in.
  • Burger King, I mean, yeah.
  • Wok Box, serving Chinese noodle soups and such. Was promising until we peeked inside and saw people slurping disgusting-looking noodles for ~$20.
  • The two shawarma places.
  • Instant ramen from 7-Eleven.

I think we ended up pondering this question for hours, walking back and forth on the street (and between it and our hotel) about a dozen times before finally committing to one of the shawarma places. And this is the result:

And it turned out, as always, ramen is better than anything.

And that's the end of the 4th day...

 
Read more...

from Zumi's Blog

Been a while since I was last here... ;D

It's also been a few months since I last took a vacation and went somewhere new for fun, so I decided to write some notes about it. Perhaps not the kind of thing you would post online, but eh, might as well share a bit of the fun (and the pain) with whoever is following this blog (hehe)...

For context, I've been eyeing one of the more northern European countries for a Christmas trip ever since I got here, but a plan never materialized: I was in Korea, then in Japan, both times with the crew I had from Canada. This time around, the crew was not assembling, so I was back to eyeing the Scandinavians. Luckily, one of my other friends had just started their Master's at the same place as mine, and they were also up for a trip. Let's go... to Copenhagen, I thought. It's not too far; we can probably reach it by train, I thought.

... and that's what we planned. Now, it was not initially this way: we thought of taking a cheap flight to Copenhagen before Christmas, then slowly taking our time through Germany with a few ICE trips afterwards. Looking at the ticket prices though:

  • 97CHF for a one-way, no-luggage (not even carry-on) ticket from Geneva to Copenhagen (~1.xx hours)
  • ~110CHF for the same trip by train, stopping at Hamburg as a midpoint (~16 hours total)

At this point I think most people can see some reasons to take the trains, but would still think flying is the saner choice. Not me! I'm a bit of a train enthusiast, and I do enjoy traveling with my small luggage, so train it was. Our plan seemed nice:

  1. train to Hamburg (~11 hours)
  2. train to Copenhagen (~6 hours)
  3. train to Berlin (~8 hours)
  4. train to home (~11 hours)

Fit some dates in between the trips, and we got ourselves a multi-city vacation. Sweet! We booked all the train trips (~220CHF total, not bad!) as well as all the accommodations a week before. My friend was a bit of an Airbnb hater, so we went for some of the cheaper hotels, and to be frank, the price was about the same.

Eagerly waiting for the departure date...

Day 1: Depart for Hamburg!

We got on the first train at around 2pm; it was a Swiss (SBB) train. As expected, the train was on time to the minute. Despite not being able to book seats in advance on these regular cross-country trains (unless you are getting 1st class, which we students are never touching), there were enough free seats for us to enjoy the ride all the way until we arrived at Basel.

From then on, we changed to one of the German (DB) trains. Nothing too peculiar: I've been on DB ICE (high-speed rail) trains before, and the trains they use here are pretty similar. Our pre-booked seats seemed to be taken, but that's alright, though it was a bit strange that the display did not show the station we were getting off at (later on we found out from the announcements that they had to switch the actual train used on the route at the last minute and did not manage to move all the seat data over). Unlike the previous ride, this time we were sitting on the train for 4 hours straight, so we got comfortable and I dozed off for a little bit.

I prepared a lot of snacks for the trip: some grapes, a pack of Zweifel, a pack of chocolate waffles, apples, bananas, coffee and Apfelschorle. It was a bit overkill, but who knows how your stomach will act during an 11-hour train trip? In the end, we managed to munch through all the fruit and drinks, leaving half of the waffles and never opening the Zweifel. During this first DB train, we were sitting with a German father and his kindergarten-aged daughter. The thought of sharing snacks crossed my mind, but as soon as we saw them eating pre-cut fruits and vegetables together, we kept the unhealthy snacks to ourselves and just shared some grapes between us and the daughter. During the trip, she also tried to talk to us, but her German was too much for my untrained ears, and so we could only smile awkwardly. Unfortunate!

Fulda

Now, no Deutsche Bahn trip would be complete without a bit of Verspätung. It was no simple delay either: it built up and up and up over time, adding a few minutes at each station, before reaching a crescendo in Frankfurt. There, the delays seemed to have put our train behind another on the same track, leaving it sitting for 15(!) minutes.

The train schedule we had

(this image was lying to you, the schedule itself was modified from Frankfurt on)

The DB app I had was very helpful in letting us know that, because of the delay, we would be missing our connection, and offered a simple “choose an alternative” page. Well done, DB app! I guess it happens often enough that having its own page is worth the hassle ;) With all that done, we were told to get off in Fulda and spend about half an hour there. We would arrive around 40 minutes later than scheduled, but that's alright: the hotel had already been informed that we would be checking in quite late – another 40 minutes would make no difference.

Fulda is a small town, but it had its own interesting Christmas market open. We didn't have time to actually buy anything there, but the smell of Glühwein was very inviting. Next time!

Bahnhofstrasse in Fulda / Frog in Fulda / Glühwein place in Fulda

Arriving in Hamburg

The second train had some minor delays as well, but nothing painful: we arrived in Hamburg a bit past 0:40. Not too bad — the original schedule was 23:55~. Big city!

(I forgot to take a photo at the time, but it was dark and I was tired)

With little patience left for making HVV (the local transport system) accounts or dealing with ticket machines, we decided to just walk to the hotel — it should take us around 12 minutes according to Google Maps. Of course, “misfortune never comes alone” (a Vietnamese idiom): we walked to the wrong branch of the hotel instead! And so another 20 minutes were spent...

We arrived at our correct hotel and got our room at around 1:30. At this time, all I could think of was to shower and sleep.

Day 2: Hamburg Day!

Did I tell you we booked a visit at freaking 9am in advance?

We passed some time on the train to Fulda looking at possible places to visit. One of them was the Miniatur Wunderland, which (incorrectly, not sure if maliciously) informed us that all visits for Saturday from 10am to 5pm were fully booked. We had two choices: 9am, or night... and nights are for the Christmas markets, so we took the 9am tickets, without knowing the incoming hit to our sleep budget...

... and so we were up at 7:30am, totalling 5.5 hours in bed. The huge coffee bottle from home proved useful, as it kept me awake for the Miniatur Wunderland visit. We walked a few minutes from the hotel, getting a day pass from the HVV app on the way. Interesting fact: the HVV iOS app requires registration before buying a pass (yuck!) but works perfectly afterwards. On the Android app you can buy one as a guest, but then it bought my friend the pass for the next day...

Miniatur Wunderland

Miniatur Wunderland is a permanent exhibition of miniature cities, built to scale with cars, buildings and people. And moving trains! They have various places around the world: from Swiss cities, Italian towns, the Monaco race track, French farms and Hamburg itself... to the farther Norwegian ports, the American landscape, Mexican deserts and even Antarctica!

Did I tell you trains run through all of them?

I only took stills, so perhaps a YouTube clip would show it so much better...

Needless to say, I was ecstatic about the models. They say it took a total of ~1.2 million man-hours to create the props, and I think it was so worth it. For the price of 17EUR (student discount applied) I think you're going to have a great time. I highly recommend it!

Also, not in the photos: you can see the operator room watching over the model simulation and the train network in real time! How cool is that!!

Another extra: one of the techniques used to “simulate” strong waves was to build an uneven surface, project a moving wave on top, and synchronize the ships' movement to the projected waves... Impressive!

Hamburg Kunsthalle

We went for the Kunsthalle (art museum) next, but since it's mostly artworks, I took no photos.

There were two buildings: one for pre-modern art, and the other for modern exhibitions. Regardless, you get a ticket for both (8EUR with student discount) at the former building, which has the main entrance.

The first building organizes itself as a story moving from Realism to Impressionism, before touching a bit on Surrealism. I'm no art student, so I'm definitely not here to comment. But I found the idea interesting: if Realism's motive was to capture the artist's rendering of nature in its full detail, then Impressionism wants to do the same thing, but makes the viewer fill in the details by drawing impressions instead of the details themselves. Of course, as viewers, we inject details drawn from our common experience, while the “soul” of the image stays intact through the impressions. A bit of work makes you enjoy it more, I might say. Is Modernism the next step, where the “soul” is even more abstract, allowing us, the viewers, to inject our feelings and thoughts into our own personal rendering of the art? I pondered a little bit... and then moved to the next building.

The second building hosts a few unconnected exhibitions, each on its own floor. Confusing and abstract is the common theme, I would say, as the art ranges from gory pieces of flesh to... something resembling myself playing with MS Paint for the first time. Interestingly, each exhibition gives us a few paragraphs of context on the artist and the collection: their upbringing, their interests and the message they hope to convey through the pieces. I'm still not so clear about whether I would “get” the exhibitions, but something dawned on me today: yes, the pieces by themselves are confusing and abstract, but it seemed that none was meant to be viewed on its own. Taken together, one finds a bit of structure: a common through-line, reaching out from the first impressions of each piece, that hopefully one can take and feed back into one's own rendering process. A bit like osu! mapping: one might find a common theme when looking at a piece both as individual patterns and as a whole, and use that as a recursive process...

Well, I felt a touch smarter leaving the museum than entering.

Food...

We stopped for food in-between. Highlight of the day was definitely the Currywurst I got in one of the Christmas markets:

Soft and tasty bratwurst against the classic flavor of curry, just the perfect combination. Worth it every time.

Now, for the un-highlight. We decided to have a bit of Korean food for lunch, as our home(?)town lacks it. Settled on “Chingu”, we made our order... and found out immediately afterwards that it was not really authentic, and seemed to be run by a group of Vietnamese cooks and staff.

I got myself a bowl of Kimchi Jjigae (one of my favorite Korean dishes), while my friend got themselves some Japchae with chicken, and we shared some fried chicken together.

The Jjigae had the ingredients, but lacked the complex taste of pork belly and kimchi brine in its soup. The end product reminds you of its salt and MSG, which aren't bad by themselves, but they should elevate the original flavors instead of replacing them... The wacky sauce they served with the fried chicken turned us away from that as well. Not the best experience. But the yuzu tea was good.

There is another story involving ramen... but that's left for another time!

Internationales Maritimes Museum Hamburg

This one completely blew my mind, despite being a last-minute addition.

From the main station, we needed to take the S4 not-really-a-subway-but-mostly-underground line, and then walk a bit further. The sun had already made most of its way down by 4pm, so it was not the most cheerful walk we had. One interesting note: we walked from Tokiostrasse to Osakaallee, ending up on a bridge across a river (reminding me a lot of Osaka) before getting to the museum itself.

We arrived just around 4:30pm, when the museum started its almost-closing-time ticket sale at 9EUR. Wonder why it's so expensive? You're about to find out. The first thing we were instructed: please head to the 9th floor and make your way down as you visit the exhibitions. So we headed to the elevator. As the doors opened up, we saw in front of our eyes...

Thousands of mini models of ships!!! What an opening. And it just keeps getting better:

And bigger...:

And even bigger...:

And more exotic:

Can you believe that, even for captains' hats, they managed to find hundreds???

At some point we jokingly said, “well, grab your Legos, you ain't gonna build any of these ships”. What do you see next?

(lego guy for scale in the 2nd photo, the ship is so big...)

But my favorite thing is that this museum also has a damn to-scale miniature model of the whole Hamburg port!?!?

(not sure if they have any connections to Miniatur Wunderland... also, damn, this is dream Cities Skylines material...)

My jaw was on the floor the whole time during this 1+ hour visit. I mean, this is no museum, this is just Hamburg flexing on all other harbor cities... To be frank, I think this will be the best maritime museum I will ever see in my life; there is just zero competition.

Even the train station on the way home gets a model ship.

Home

... and that's all for the day. I'm very pleased! Hamburg's cultural side is perfect so far. We grabbed some Glühwein on our way home, and joined some remote friends for a gaming stream to spend the night. And then there's me spending hours writing this up. I'm happy to have had this experience, so it's probably worth it to jot it down while it's still fresh.

Until next time! Trains to Copenhagen tomorrow afternoon.

 
Read more...

from hung3a8

Peewee is a lightweight ORM for Python. Compared to SQLAlchemy, Peewee is much simpler, and compared to the Django ORM, it is much more explicit. Here are some Peewee tricks you can use to improve your code.
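
If you have never used Peewee before, here is a minimal sketch of what model definitions and a query look like, just to set the stage. The in-memory database and the User/Tweet models are only for illustration:

from peewee import SqliteDatabase, Model, CharField, TextField, ForeignKeyField

# A throwaway in-memory database, just for this example.
db = SqliteDatabase(':memory:')

class BaseModel(Model):
    class Meta:
        database = db

class User(BaseModel):
    username = CharField(unique=True)

class Tweet(BaseModel):
    user = ForeignKeyField(User, backref='tweets')
    message = TextField()

db.create_tables([User, Tweet])
alice = User.create(username='alice')
Tweet.create(user=alice, message='hello peewee')
print([t.message for t in alice.tweets])  # -> ['hello peewee']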

Utilize DeferredForeignKey to avoid circular imports

Like any other ORM, Peewee has ForeignKeyField and ManyToManyField to work with relationships. Although these fields are really convenient, they fall short in some cases.

The most common issue is circular imports. Take a look at the following code:

class User(Model):
    username = CharField()
    favorite_tweet = ForeignKeyField(Tweet, null=True)  # NameError!!

class Tweet(Model):
    message = TextField()
    user = ForeignKeyField(User, backref='tweets')

While it is possible to solve this issue by changing the order of the classes, that is not practical and may cause further problems. The simplest fix is to use a DeferredForeignKey field:

class User(Model):
    username = CharField()
    favorite_tweet = DeferredForeignKey('Tweet', null=True)  # OK

class Tweet(Model):
    message = TextField()
    user = ForeignKeyField(User, backref='tweets')

While DeferredForeignKey is powerful, it can't resolve itself if the deferred field is defined after the target class; it then needs to be resolved manually, which is really annoying:

class User(Model):
    username = CharField()
    favorite_tweet = DeferredForeignKey('Tweet', null=True)  # OK: resolved when Tweet is declared

class Tweet(Model):
    message = TextField()
    # 'User' is already defined, so this field is never resolved automatically
    user = DeferredForeignKey('User', backref='tweets')

DeferredForeignKey.resolve(User)  # Too annoying!

This can be solved easily by running the following at the end of the file (if every model lives in a single file), or after importing all the models into __init__.py:

from peewee import DeferredForeignKey, Model

from .model import User, Tweet

# Resolve every deferred foreign key whose target is one of the models
# imported above.
for model in Model.__subclasses__():
    DeferredForeignKey.resolve(model)

Model.__subclasses__() returns a list of all direct subclasses of Model, which is a handy way to get all the models that have been imported before the call. With this, DeferredForeignKey will resolve all unresolved fields stored in DeferredForeignKey._unresolved. It is a bit hacky, but from now on, you can use DeferredForeignKey instead of ForeignKeyField to avoid circular imports.
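
Note that __subclasses__() only returns direct subclasses, so if your models inherit from a common BaseModel rather than from Model directly, you may want to walk the class tree recursively. A small sketch (the helper name is mine):

from peewee import DeferredForeignKey, Model

def all_models(cls=Model):
    # Yield every (transitive) subclass of Model, depth-first.
    for sub in cls.__subclasses__():
        yield sub
        yield from all_models(sub)

for model in all_models():
    DeferredForeignKey.resolve(model)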

Although the ForeignKeyField circular-import problem can be fixed with DeferredForeignKey, things are still challenging when it comes to ManyToManyField: it only accepts the model class, and the only thing that can be declared deferred is the through model, not the target model. To work around this, I wrote a modified ManyToManyField that behaves just like DeferredForeignKey, storing unresolved fields in ManyToManyField._unresolved:

import operator

import peewee
from peewee import DeferredForeignKey


class ManyToManyField(peewee.MetaField):
    _unresolved = set()

    def __init__(self, rel_model_name, **kwargs):
        self.field_kwargs = kwargs
        self.rel_model_name = rel_model_name.lower()
        ManyToManyField._unresolved.add(self)
        # Initialize the underlying Field so attributes such as _order
        # (used for sorting in resolve()) are set up properly.
        super().__init__()

    # Restore identity hashing so instances can live in the _unresolved set.
    __hash__ = object.__hash__

    def set_model(self, rel_model):
        # The target model is now known: replace this placeholder with a real
        # peewee.ManyToManyField on the owning model.
        field = peewee.ManyToManyField(rel_model, **self.field_kwargs)
        self.model._meta.add_field(self.name, field)

    @staticmethod
    def resolve(model_cls):
        unresolved = sorted(ManyToManyField._unresolved,
                            key=operator.attrgetter('_order'))
        for dr in unresolved:
            if dr.rel_model_name == model_cls.__name__.lower():
                dr.set_model(model_cls)
                ManyToManyField._unresolved.discard(dr)

    def _create_through_model(self):
        """Use DeferredForeignKey to create the through model, as the
        rel_model is not resolved yet."""
        lhs = self.model.__name__.lower()
        rhs = self.rel_model_name

        class Meta:
            database = self.model._meta.database
            schema = self.model._meta.schema
            table_name = '%s_%s_through' % (lhs, rhs)
            # Unique index over the two foreign keys of the through table.
            indexes = (((lhs, rhs), True),)

        params = {'on_delete': self.field_kwargs.get('on_delete'),
                  'on_update': self.field_kwargs.get('on_update')}
        attrs = {
            lhs: DeferredForeignKey(lhs, **params),
            rhs: DeferredForeignKey(rhs, **params),
            'Meta': Meta}

        klass_name = '%s%sThrough' % (lhs.title(), rhs.title())
        return type(klass_name, (peewee.Model,), attrs)

The new ManyToManyField stores the related model's name in _unresolved and resolves it once the target model is available, via ManyToManyField.resolve(target_model). Furthermore, if the through model is not specified, it is created automatically, but with DeferredForeignKey instead of ForeignKeyField as in Peewee's original implementation, since the related model is not resolved yet.
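
To make the intended usage concrete, here is a rough sketch using the modified ManyToManyField defined above, with hypothetical Student/Course models: the relation is declared by name, and resolved manually once the target class exists.

from peewee import CharField, Model

class Student(Model):
    name = CharField()
    # The target model is referenced by name, so Course may live in another
    # module without causing a circular import.
    courses = ManyToManyField('Course', backref='students')

class Course(Model):
    title = CharField()

# Resolve pending many-to-many relations now that Course exists.
ManyToManyField.resolve(Course)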

ManyToManyField in autogenerated models from playhouse.reflection.generate_models

Peewee ships with a built-in extension package called playhouse, which is extremely handy: custom fields, a model generator, and more. playhouse.reflection.generate_models is used to generate models from an existing database and return them. This is great for small projects, or for projects where different codebases share the same database but cannot share the model code (e.g. my project, which has a bridge written in Python (using Peewee) and a backend written in TypeScript (using TypeORM)). It also reflects relationships, which is very useful, as you don't have to wire them up yourself.

The catch is the way playhouse.reflection.generate_models handles many-to-many relationships. Currently, the only relationships supported by the reflection code are one-to-many, many-to-one, and one-to-one, all of which require nothing more than foreign keys. But when it comes to many-to-many, the madness begins. generate_models only emits ForeignKeyField to relate fields and tables, so when it meets a many-to-many junction table, which of course contains only the two foreign keys pointing at the two related tables, it reflects it as just another model. This causes many problems, as you now have to work with many-to-many relationships in a much more convoluted way, perform manual querying, and so on. Luckily, we can use the following workaround to create ManyToManyField on those models, which solves all of the problems above.

for name, model in models.items():
    attrs = {}
    # Only add many-to-many fields to "real" models; junction tables are
    # assumed to have no `id` primary key of their own.
    if 'id' not in model._meta.fields:
        continue
    for fk in model._meta.backrefs.keys():
        if 'id' not in fk.model._meta.fields:
            # fk.model looks like a junction table: find the model on the
            # other side of the relationship.
            right_model = [m for m in fk.model._meta.model_refs.keys()
                           if m != model]
            if len(right_model) != 1:
                raise ValueError('the many-to-many model %s does not contain '
                                 'exactly two foreign keys' % fk.model)
            right_model = right_model[0]

            # Peewee's own ManyToManyField can be used here, since both
            # related models have already been generated at this point.
            field = ManyToManyField(right_model, backref=pluralize(name),
                                    through_model=fk.model)
            attrs[pluralize(right_model._meta.name)] = field
    for attr, field in attrs.items():
        model._meta.add_field(attr, field)

This code is added directly to the end of the generate_models method of playhouse.reflection.Introspector, which is what actually generates the models (playhouse.reflection.generate_models is a separate helper function that creates an Introspector from the database and then generates the models for you). Let me explain the code a little bit:

  1. First, we skip all models without a primary key field (we assume it is named id) and work on the rest. For each remaining model, we go through its back-references; whenever a back-reference comes from a table that has no primary key field of its own, we treat that table as a junction table and move to the next step. Repeat until we have gone through all models and all of their back-references.
  2. Once we have found such a table, it is time to perform some magic. We look for the right-side model, i.e. the other model the junction table points at. Auto-generated many-to-many tables from other ORMs typically contain exactly two foreign keys, one for each side, so we require that exactly one referenced model differs from our current (left-side) model.
  3. With the right-side model in hand, we create a new field named after it, in plural form (use a pluralize function of your choice; we will not discuss it here). This new ManyToManyField gets both the backref and through_model options to make the most of it, since we have all of that information available. Once finished, go back to step 2 if there are more junction tables, or else move on to step 4.
  4. Finally, with all the new attributes collected, we add the fields to the model's metadata using its add_field(attribute_name, field) method. And there you go! A model class with proper ManyToManyField relations, ready for your precious projects!

The code above might not work with your codebase out of the box, and you may need to adapt it a little. Still, the concept is pretty nice, and I hope it saves you a bunch of time.
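
For concreteness, here is a rough sketch of how the patched generate_models might then be consumed; the database file and the student/course tables are made up for illustration:

from peewee import SqliteDatabase
from playhouse.reflection import Introspector

# Reflect an existing database with the patched generate_models().
db = SqliteDatabase('app.db')
models = Introspector.from_database(db).generate_models()
Student = models['student']

# The junction table now backs a ManyToManyField, so the relation can be
# traversed directly instead of querying the through table by hand.
alice = Student.get(Student.name == 'Alice')
for course in alice.courses:
    print(course.title)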

Ending

I hope these tricks will help you a lot with your code. If you have any questions, please feel free to contact me.

Happy coding!

 
Read more...

from dnx04

Typst is a brand-new typesetting system, the ultimate solution for document typesetting of the new age: a unified typesetting experience that combines the best of many worlds.

I believe that sooner or later, Typst will take over the reign of LaTeX in academia. Getting my hands dirty with Typst is a satisfying experience for me these days.

It works elegantly, and blazingly fast.

Everything works out of the box; you don't really have to learn much to start. Typst's document hierarchy and scripting are consistently and elegantly designed, and as easy to understand as Markdown.

95% of Typst is written in Rust, and Typst's developers have done their job very well in terms of performance optimization.

Incremental compilation system

While being powerful and feature-packed like LaTeX, Typst surprised me with a true WYSIWYG (what you see is what you get) experience. In the past, many LaTeX contenders such as LyX and BakomaTeX tried to replicate Word's WYSIWYG experience, but their implementations were far inferior.

In LaTeX, the PDF needs to be recompiled after every change, which is inefficient and resource-intensive. With Typst, every change you make is applied on top of what was compiled before, thanks to comemo, Typst's solution for cached compilation. Each letter and object you type is reflected in your PDF instantly and fluidly.

 
Read more...

from Zumi's Blog

#cooking #recipe #vietnamesefood

Just writing down what this video is doing for my own reference.

Preparing the meat

For 6 portions: ½ kg pork belly + ½ kg pork leg (for the extra softness and sweetness).

  • Cut them into bigger chunks, they'll be nice to bite anyway!
  • Marinate the meat:
    • Baby onions (4x): slice them thinly
    • Red chili pepper: cut two ways! Thinly slice one half, and slice the other half into big chunks.
    • Garlic: crushed.
    • Lime: squeeze half into the meat.
    • Take the chunked chili, garlic and salt and grind them. Put everything into the meat, mix them carefully.
  • Leave them in the fridge for half an hour.

For the eggs: boil for 12 minutes, then set aside.

Braising Juice

You need:

  • Coconut juice: 1 liter. Filter away the coconut: you only need the juice!
  • Water: add the same amount as the juice.

Wait for everything to boil!

Putting them together

  • Once the juice is boiled, add the meat in. Take out all the marinating ingredients from the boil!
  • Leave everything for two hours.
  • Add the eggs, a small amount (~1 teaspoon) of fish sauce and a bit of sugar for the eggs to golden up. Leave for another hour!

When braising, leave a big gap for the lid: it keeps the juice clear.

That's it, have fun :D

 
Read more...

from Zumi's Blog

... is GTK's default gtk-font-name setting.

What is system-ui?

You can change it in .config/gtk-4.0/settings.ini (or .config/gtk-3.0/settings.ini) like this:

gtk-font-name=Font Name 10

Read more to see how I found it.

  1. Looking for "system-ui" firefox landed me on this Firefox issue tracking the feature.
  2. The issue has an implementation here that maps the system-ui family to the internal menu font-family.
  3. Looking at the gfx/thebes/gfxPlatformFontList.cpp file in the implementation I saw StyleSystemFont::Menu. How do we find it?
  4. ... Searchfox of course! That leads me to widget/gtk/nsLookAndFeel.cpp...
  5. And from there just look for mMenuFontName that got assigned...
  6. From here, we know it's something from GTK. A quick look at ArchWiki on GTK shows that you can change gtk-font-name... so we just try it and 🎉
 
Read more...

from Zumi's Blog

This is a web version of my last semester project report.

Introduction

In functional programming, it is common that programmers describe the possible side effects of a function in its signature. This can be expressed by returning a monad, or by expressing the capability to use the side effects as a parameter of the function. Scala uses the notion of contextual parameters (“Contextual Parameters, Aka Implicit Parameters” n.d.) for such capabilities, allowing the programmer to delegate to the compiler the tedious work of explicitly writing down an (obvious) instance of the capability when calling that function:

def sqrt(x: Double)(using {*} CanThrow[Exception]) =
    if x >= 0 then builtin.sqrt(x)
    else
        throw new Exception("sqrt of negative number")

However, often the capabilities exist only as a proof of existence for static type-checking purposes, and are never interacted with during the execution of the program (other than being passed around). Hence, this style of programming creates a runtime overhead that we wish to avoid. Erased definitions allow the programmer to explicitly mark a definition/parameter as unused at runtime, guaranteeing that it is optimized away during compilation. The compiler should also guarantee that erased definitions are not referenced at runtime, and that the observable behaviour of the program does not change after erasure of these definitions. Scala already has erased definitions as an experimental feature (“Erased Definitions” n.d.); however, the feature is incomplete and has some soundness issues (“Unsoundness Due to ‘Erased‘ · Issue #4060 · Lampepfl/Dotty” n.d.).

My semester project’s goal was to expand on the original implementation of erased definitions, adding needed functionality as well as fixing soundness issues, giving it firm ground to become an official Scala feature. Although the goal was not fully realized, the project has made the following contributions, which are described later in the report:

  • Implemented support for mixed erased/non-erased parameters in functions (“Implement Individual Erased Parameters · Pull Request #16507 · Lampepfl/Dotty” n.d.). This fundamentally changes how erased functions are represented and treated in the compiler, and is covered in Sections 2, 3 and 4.
  • Extended support for erased classes, syntactic sugar for classes with only erased definitions. The work is preliminary; future ideas for expansion as well as possible challenges are described in Section 5.
  • Fixed (“Ensure Erased Vals Pass CheckRealizable · Pull Request #16111 · Lampepfl/Dotty” n.d.) the aforementioned unsoundness bug (“Unsoundness Due to ‘Erased‘ · Issue #4060 · Lampepfl/Dotty” n.d.), using a pessimistic type check on erased definitions to prevent bad bounds. We describe the details in Section 6.
  • Explored the interaction between erased definitions and capabilities, where the ability to create a capability at will is a problem. We present a way forward, by restricting the right-hand sides of erased definitions to certain forms. The details are in Section 7.1.

Finally, the latter half of Section 7 discusses some future directions for erased definitions, including a possible formalization of the feature.

Erased Definitions and “Erased” Functions

Erased definitions are an experimental feature of Scala 3 (dotty), available behind a language feature import (“Erased Definitions” n.d.). When enabled, the current dotty compiler introduces two new concepts: erased definitions and erased functions.

Erased definitions are val or def definitions that are guaranteed to be erased at runtime. They cannot be referenced from other code, except from other erased definitions’ right-hand sides and as arguments to erased parameters. Note that an erased definition’s right-hand side is itself only type-checked by the compiler: in fact, right after type-checking, the right-hand side of the definition gets erased. This creates a problem if the right-hand side is a diverging computation and the assigned type is uninhabited and/or not meant to be easily constructed. We discuss this problem in Sections 6 and 7.1.

Erased functions, as of the start of the project, are functions that take erased parameters: all parameters are erased at runtime, and the actual function at runtime does not take any (runtime) parameters.

// before erasure
def fn(erased x: Int, y: Int) = 5

erased val x = 5
val t = fn(x, x)
// after erasure
def fn() = 5

val t = fn()

Erased functions have a different signature from their non-erased counterparts, represented by a different method companion (Erased[Contextual][Implicit]Method). This disallows overriding an erased method with a non-erased method and vice versa, avoiding a conflict in signatures post-erasure.

Individual Erased Parameters

The above representation of method types also has a severe limitation: either all parameters are erased, or none are. A major part of the semester project was to overcome this limitation, by allowing the erased keyword before each individual parameter, and only erasing those parameters during the erasure phase.

To support this, several parts of the compiler need to be changed:

  • Add support for erased flags in the parser.
  • Introduce a special annotation for erased parameters in method types, and change the rules for comparing method types with erased parameters.
  • Change how we erase method parameters during erasure. This is rather simple, as it is just an extension of the current work: we only erase the erased parameters, instead of all of them in an erased method.

We shall go into more details of the first two changes.

Parser changes

The main challenge with the parser is how to handle erased as a soft keyword, in the following cases:

def f1(using inline Int)
def f2(using inline: Int)

def f3(using erased x: Int)

def f4(using erased inline  Int)
def f5(using erased inline: Int)

Scala allows contextual parameters to be declared unnamed (so they are mainly used as contextual arguments of other function calls in the body). However, by the time the parser reaches the identifier following the using keyword, it wants to know whether we are parsing named or unnamed parameters (since the whole parameter list has to use the same syntax).

The original parser, before erased parameters, only allowed inline as a parameter modifier and soft keyword. Therefore, we can differentiate cases f1 and f2 by checking the lookahead when we see inline: if the lookahead is :, we know that inline is an identifier, otherwise it is a keyword. Furthermore, Scala does not allow implicit inline parameters without an identifier (possibly because of this parsing ambiguity), so the equivalent of f3 written with the inline keyword is not valid Scala.

A rather simple approach works here: we eagerly parse all the soft modifiers of the first parameter (erased for f3 and f4, and both erased and inline for f5); figure out whether the following syntax is named or unnamed; parse the parameters; and finally apply the modifiers back to the first parameter. To figure out whether the current erased or inline is a soft modifier or not, we simply look at the lookahead, and assume that it is if the lookahead is not :.

The same eager modifier parsing approach is used when parsing lambdas with erased parameters, whereas previously no modifiers were allowed in lambda parameters.

Erased Method Types, Erased Method Companions

We implement method types with erased parameters as normal method types: there is no distinction in MethodCompanion between the two. Erased parameters are instead annotated with an internal annotation, ErasedParam. When comparing method types, we require that the parameters match on this annotation: for two method types to have a subtype relationship, each pair of parameters at the same position must either both have the annotation or both lack it.

That is, given

m1 = MethodType([Int, Int @ErasedParam], Int) // (Int, erased Int) => Int
m2 = MethodType([Int, Any @ErasedParam], Int) // (Int, erased Any) => Int
m3 = MethodType([Int, Int], Int)              // (Int, Int) => Int
m4 = MethodType([Int @ErasedParam, Int], Int) // (erased Int, Int) => Int
m5 = MethodType([Int], Int)                   // (Int) => Int

We consider m2 <: m1 by the usual subtyping rules. However, m3, m4 and m5 do not have any subtyping relationship with any of the other types listed. Interestingly, although m1, m2 and m4 all become m5 after erasure, during typing we do not consider them equal nor as having any subtyping relationship. This is to avoid “casting away” the erased parameter, which would defeat the purpose of having it in the first place.

The approach requires no change to the representation of method types, nor to the TASTy output format. In fact, the special logic for the Erased[Contextual][Implicit]Method companions is now unneeded and has been removed, making method type handling less cumbersome. It does, however, require some changes in the type comparer to treat the annotation specially while comparing method parameters. This deviates from the usual logic of annotations, where they do not matter during comparison; hopefully, since this is a tightened restriction, the implementation does not introduce unsoundness here.

Another approach was also considered, which does not require this special annotation treatment: extend the Erased[Contextual][Implicit]Method companions to also carry a list of boolean flags indicating erased/non-erased parameters. Method type comparison includes MethodCompanion comparison, and method types with differing companions are naturally considered not comparable. However, this further complicates the already exponentially growing set of method companions (with mixed Impure, Contextual, Implicit), and introduces a non-backward-compatible change to the TASTy representation of method types. This was considered difficult to maintain, and changing the TASTy format is undesirable, so the former approach was preferred.

Function Types with Erased Parameters

Function traits are how Scala represents lambdas/closures in the type system. Typically, a function of arity N with parameter types T1, …, TN and return type R is assigned the type FunctionN[T1, …, TN, R], with its syntactic sugar (T1, …, TN) => R. The trait itself is defined as

trait FunctionN[-T1, ..., -TN, +R] {
    def apply(x1: T1, ..., xN: TN): R
}

Similarly, contextual function types are represented as ContextualFunctionN, with a similar trait definition (with using added to the parameter list).

Function traits are important during typing and during erasure (into JVM types):

  • During typing, we want to distinguish between different function types (Function1[Int, Int] and Function2[Int, Int, Int] should not have a subtyping relation, nor should Function1[Int, Int] and ContextualFunction1[Int, Int]; however, Function1[Any, Int] <: Function1[Int, Int]).
  • During erasure, we want to convert function types into their JVM equivalents. Furthermore, Scala-specific function types like ContextualFunctions do not have a JVM equivalent; they need to be converted into normal Functions.

Previously, a function could only have the entire parameter list erased, and hence could be represented by a simple set of traits ErasedFunctionN:

trait ErasedFunctionN[-T1, ..., -TN, +R] {
    def apply(erased x1: T1, ..., erased xN: TN): R
}

The ErasedFunctionN[T1, …, TN, R] traits are then transformed into Function0[R] during erasure. This is a simple representation; however, we cannot simply extend it to allow individually erased parameters.

An initial idea for an extension was to do the same thing as for method types, and annotate the parameter types T1, …, TN with ErasedParam annotations. This, however, does not work when adding the same annotation treatment to the comparison of ErasedFunctions. This is due to the fact that the ErasedFunctionN are just normal traits, which can be aliased or extended:

trait EraseSecondParam[T1, T2, R]
    extends ErasedFunction2[T2, erased T1, R] { }
// is EraseSecondParam[Int, Int, Int] =:= ErasedFunction2[Int, erased Int, Int]?

Note that in the example, EraseSecondParam annotates parameters for ErasedFunction2, but it can shuffle the parameters (or use any type expressions) freely. This makes type-parameter annotation matching impossible.

Erased function types with refined ErasedFunction trait

The final approach involves defining an empty trait, ErasedFunction, and defining a function type with erased parameters as a refinement of this trait:

// (Int, erased Int) => Int
type IeII = ErasedFunction {
    def apply(x1: Int, erased x2: Int): Int
}

With this approach, we simply delegate both typing and erasure to the underlying MethodType (with erased parameters) of the refined apply method:

  • During type comparison, the type comparer naturally compares the refined apply methods, which performs the comparison of the underlying MethodTypes, so the logic implemented in Section 3 kicks in automatically.
  • During erasure, we erase all refined types whose base type derives from ErasedFunction (to support aliased/extended traits of it). Erasure looks at the refined apply method type and erases the value into a FunctionM value, where M is the number of non-erased parameters.

From this delegation we also get contextual/implicit function types with erased parameters for free.

The implementation mostly focuses on making the other phases of the compiler recognize refined ErasedFunction types as function types. Luckily, this is very similar to the implementation of polymorphic function types (“Polymorphic Function Types” n.d.). However, since the compiler treats polymorphic function types themselves as a separate kind of function type, support for polymorphic functions with erased parameters does not come “for free”, and is hence left out of the scope of the project.

Lambdas with Erased Parameters

Just like methods and function types, lambdas themselves can declare a parameter as erased:

type T = (Int, erased Int) => Int // syntax sugar for refined ErasedFunction trait

val v1    = (x: Int, erased y: Int) => x + 1
val v2: T = (x, erased y) => x + 1
val v3: T = (x, y) => x + 1 // error: expected T, got (Int, Int) => Int

Of note is the special case of v3, where we know that y has to be erased, but the compiler rejects the definition anyway. This is intentional: it is hard to tell whether a parameter is inferred as erased (especially with the introduction of erased classes in Section 5), and runtime references to such parameters can be surprising. However, if we explicitly mark y as erased, the compiler will accept the definition, as in v2.

Adding support for a soft keyword in the lambda parameter list, however, is another challenge for the parser. Scala has support for placeholder syntax for lambdas (“Placeholder Syntax for Anonymous Functions Scala Language Reference” n.d.), but this introduces an ambiguity in the parser:

val v1 = (_ + 1, _ + 2)           // (Int, Int) => (Int, Int)
val v2 = (x, y) => (x + 1, y + 2) // (Int, Int) => (Int, Int)

val v3 = (x, erased y)      // error: invalid syntax
val v4 = (x, erased y) => x // ok

Note that while parsing the right-hand sides of v1 and v2, we cannot distinguish between a tuple expression and a parameter list until we have parsed the entire “tuple” and seen the following arrow. The Scala parser overcomes this ambiguity by parsing the parameter list/tuple as a tuple, and re-interpreting the tuple as a parameter list if it is followed by an arrow. However, this is a problem for erased, since parameter modifiers in a tuple are not valid syntax.

To mitigate this, we allow tuples to temporarily contain an otherwise-invalid ValDef (with modifiers) as a tuple element. During parsing, if the tuple turns out to be an actual tuple (and not a parameter list), we check for ValDefs and reject them.

Erased Classes

Erased classes are syntactic sugar for automatically marking every definition/parameter whose type is the defined class as erased. Classes that extend an erased class have to be erased as well.

erased class A(x: Int)
erased class B extends A(5)
val a = new A(5) // a is erased
def f(x: A, y: Int) = y + 1 // x is an erased parameter
val v = (x: A, y: Int) => y + 1 // (erased x: A, y: Int) => Int

def f1[T <: A](x: T) = 5 // x is NOT erased

During typing, we mark as erased all definitions and parameters whose types are declared or inferred to be a concrete erased class. Erased classes are otherwise treated exactly like normal classes. This means that, even if by subtyping rules we can deduce that a type parameter is an erased class, we don’t infer erased from type parameters (as in f1).

These limitations mean that we are not creating a separate hierarchy of “erased” types, similar to phantom types (Stucki, Biboudis, and Odersky 2017). Instead, erased classes are purely syntactic sugar, and do not change the type system in any way.

Soundness of Erasure

Scala supports path-dependent types, where one type can refer to another type defined as a member of another variable/path, and traits can give lower and upper bounds for their values’ type members. Any bounds can be declared on a trait, but a concrete type has to be given when constructing a value. Hence, the existence of a value implies a subtyping relationship between the lower and upper bound. The call-by-value semantics of Scala ensures that an expression with impossible bounds cannot successfully reduce to a value (it can throw an exception, or loop forever). However, this is not ensured for erased values, which have no runtime semantics.

trait A {
    type T >: Any
}
def upcast(erased x: A, y: Any): x.T = y
erased val v: A { type T <: Nothing } = ???
def coerce(x: Any): Int = upcast(v, x)

The above program type-checks if v and the parameter x are not erased, but the program would crash once v’s right-hand side is executed. However, we cannot allow the program to type-check if these definitions are erased, since the right-hand side of v is then never executed (in fact, v does not exist at runtime, and we have an unchecked cast in coerce).

Note that at the point of v’s definition, we know that v’s type is uninhabited: there is no type T that is a subtype of Nothing and a supertype of Any. Normally, any attempt to use the path-dependent type v.T is subject to uninhabited-type checks (i.e. a realizability check), but it is fine to pass v to a function. This gives us an approach to check for this unsoundness: at the point of passing an expression as an erased argument, we perform a realizability check on that expression’s type. This is implemented in (“Ensure Erased Vals Pass CheckRealizable · Pull Request #16111 · Lampepfl/Dotty” n.d.).

There are two notable points about this implementation:

  • The realizability check itself is not a perfect check, but rather a pessimistic one: types passing the check are guaranteed to be inhabited, but not vice versa.
  • We only perform the check if the function being called uses the path-dependent type in its parameters or result type. This seems to be enough to prevent the unsoundness bug, but formally proving that it is a sufficient condition is future work.

Future Work

Interaction with Capabilities

Right now, safer exceptions (“CanThrow Capabilities” n.d.) use erased CanThrow[E] capabilities to control the types of exceptions allowed to be thrown. The CanThrow instances themselves are synthesized with the right-hand side scala.compiletime.erasedValue[T], which is a special alias of ???, the “undefined” value. Since erased values are erased at compile time, this does not cause a crash at runtime.

However, this also means that programmers can summon their own instance of CanThrow with scala.compiletime.erasedValue[T], or another similar expression (such as an endless loop). This completely defeats the purpose of having CanThrow instances as capability tokens. We are in a hard position: we would like to prove that erased definitions are terminating, yet that is an impossible task.

One of the proposed approaches is to limit the erased expressions themselves. After macro expansion and inlining, an erased expression should be either a constant or a constructor call. Note that no function calls are allowed; however, we do not attempt to check that constructors terminate: it should be the programmer’s responsibility that capability tokens are successfully created, erased or not.

The realization of this approach, including the restriction implementation and a new approach to synthesized erased values, is early work in progress.

Formalization

A possible formalization of erased definitions and parameters, including a proof of semantics-preserving erasure, is highly desirable future work. With the restriction in Section 7.1, it might also be possible to prove that erased values are side-effect-free.

Idris 2 has compile-time-only parameters as part of its quantitative type theory (Brady 2021). Its explicit annotation and checking of parameter usage counts within the function body help make the compile-time erasure semantics of dependently typed values obvious and easy to understand. It will be interesting to look into the underlying formalism of quantitative type theory and apply it to our own future formalization.

Dafny has ghost definitions (“Dafny Documentation” n.d.), which are specification-only variables. Due to its strict distinction between pure and impure functions, pure functions can take both ghost and runtime values and return either, while impure functions behave like Scala’s methods with erased parameters.

References

 
Read more...