Apple’s use of Swift in iOS 12 (timac.org)
184 points by glhaynes on Sept 28, 2018 | hide | past | favorite | 139 comments


I have been writing code for over 10 years and I can comfortably say that Swift is one of the best languages today, better than many languages at a much later stage in their life (hey there, Java!). I'm not talking about the standard lib; I'm talking primarily about the language and how well one can express their ideas with it without shooting oneself in the foot.

The only reason I suspect Apple is not all over Swift is that many of their programmers are the type of coders who like falling back to C when they feel they need a performance boost. Perhaps they like the message-passing nature of ObjC. Dunno. I can say that I've been able to get apps to 0 fixable crashes with Swift (some crashes are system level), which is not something I've ever done with ObjC, let alone Java for Android.


It was a couple years ago, but I spoke with one of their engineering managers about Swift adoption within the company. His response was that it was being used for new apps/projects, but not so much for existing codebases.

Having worked on a large Obj-C codebase that began adopting Swift, there are definitely headaches involved in Obj-C/Swift interop, and you often don't get many of Swift's benefits unless your new code has few to no dependencies on Obj-C code (and isn't depended on by it), or unless you can refactor that code into Swift.

Given how quickly it seems they've been forced to ship, I can understand not wanting to deal with these issues, or not having the time to address them.


There are three advantages that I saw immediately upon switching to Swift:

- It was easier to hire developers

- Code was less prone to null pointer exceptions

- There were fewer assumptions when deserializing JSON into a struct/object


We moved a very large iOS app over to Swift, and the first two benefits are really huge. I know people have opinions on these things, and both sides are right, but the really strong typing and enforced nullability have been hugely important to getting junior engineers contributing to the codebase.

Not saying that it's an unsolvable problem elsewhere, it's just an anecdotal benefit we got


I've done Android for embedded apps, using immutable data flows only, Optionals everywhere, and very careful debugging for memory leaks. Go to a screen, force GC, go back, force GC; memory before and after should be the same. We'd find leaks as small as 4 kB this way, and it was every bit as excruciating as it sounds.

It's weird using Swift and realizing how much of that trouble we went through is avoided just by not using a GC'd language


I worked in Swift for just over a year. Love the language. But unfortunately you can still get reference cycles that leak memory. And they can be basically unsolvable with a large enough app. Merely avoiding GC doesn't fix all your problems!


Isn't reference counting just a different style of garbage collection? I see lots of articles saying that Swift both is and isn't garbage collected, and I'm curious what the proper answer is here.


Not quite the same as garbage collection. When a reference to an object is nil'ed out, that object's reference count is reduced by 1. When the reference count reaches 0, the object is removed from memory. You could still cause memory leaks like so:

```

class Node {
    var next: Node?
    var prev: Node?
}

let a = Node()
let b = Node()
a.next = b  // a retains b
b.prev = a  // b retains a: a strong reference cycle

```

These two objects will persist in memory for the entire life of the app even when all references to a and b are gone, except for their references to one another.
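A minimal sketch (names are illustrative) of how such a cycle is typically broken: marking the back-reference `weak` so it doesn't increment the retain count, which lets both objects deallocate once outside references disappear.

```swift
var deinitCount = 0

final class Node {
    var next: Node?
    weak var prev: Node?   // weak back-reference: does not retain the other node
    deinit { deinitCount += 1 }
}

func makePair() {
    let a = Node()
    let b = Node()
    a.next = b
    b.prev = a   // no strong cycle, because prev is weak
}

makePair()
// Both nodes were freed when makePair returned.
print(deinitCount)   // 2
```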


Yes, but the overhead is much smaller compared to other techniques, and it's simpler to predict runtime behavior because there is none of the nondeterminism that a tracing collector would otherwise introduce.


With all those differences, reference counting is not garbage collection; that's like calling free() in C garbage collection, which is incorrect.


https://en.m.wikipedia.org/wiki/Garbage_collection_(computer...

Reference counting is listed as a strategy to achieve garbage collection. Broadly speaking, garbage collection is the process of automatically managing the lifecycles of objects in memory, which reference counting certainly achieves.


I've gotten the impression (and that's all it is, could be totally wrong) that Apple's a little hesitant to go all in until ABI stability arrives in Swift 5. Even after that, I doubt they'll do much rewriting of old stuff; but I bet we'll see most new stuff being in Swift then.


> many of their programmers are the type of coders who like falling back to C when they feel they need a performance boost

Question for anyone who knows: what percentage of Apple's application-software programmers double as system-software/kernel programmers? Because I expect that'd color their preferences for a daily-driver language quite a bit.


A perfectly good reason, considering that Swift only became usable at v3.0, two years ago.

I've developed in it since version 1, small utilities only, and even for those projects it was not enough.

With version 4, it's my favorite language to develop for.


I can't wait for Swift 5, when we finally get ABI stability; that's the last major issue holding Swift back.


That'll definitely be nice, for people already using the language, but I don't know of anyone actually held back by that. I'm more concerned by issues like lack of a good concurrency story, and lack of a good cross-platform story (differing behavior on Darwin/Linux, no Windows support).


May I ask what’s ABI Stability?


ABI stands for Application Binary Interface. It's basically how pieces of compiled code interface with each other (e.g. how functions are called and how values are returned).

In Swift, this is currently not stable, meaning it changes with each version. The result is that you can't use a library compiled with version X of Swift in a project built with version Y. From version 5 onward, the ABI will remain stable, so binaries built with version 5 will stay compatible with later versions.


Totally agree with this. I tried v1 and then v2 and nearly never came back because all my code broke going 1-2 and it was just generally awful. V4 is lovely though. Some people with common sense must have joined the party


> Some people with common sense must have joined the party

More likely the language just matured. I'm glad they were not afraid to iterate as it is hard to get everything right on the first try and being stuck with a suboptimal design for the next 20 years would have been much worse.


It's funny to bring this comparison up (since Graydon is now on the Swift team), but Rust _also_ iterated like mad and took a while to mature — however, they waited until some degree of maturity before they called it 1.0, and have since been incredibly cautious about breaking changes. And they did all of this iterating while building Servo in-house at Mozilla on the unstable version.

This is, in many ways, the diametric opposite of what this thread is discussing: pushing out nominally "stable" versions to the outside world, breaking things, but not committing internally.


I agree they probably shouldn't have called Swift 1.0 so soon, but I ascribe that to Apple management not allowing them to come out of the gate with something labelled 0.x.


This.

Apple prides itself on "not shipping beta products", which is kind of BS.


This only applies to products that normal users interact with - unfortunately it does not apply to anything they do that is targeted at developers.


Agree with this comment. If you’ve watched the community work at this language it’s been a very clear and well documented process. I’ve learned a lot just by watching Swift evolve through proposals.


For me, to make Swift a total contender it needs good support for non-Apple platforms:

https://www.reddit.com/r/swift/comments/8zb9y1/state_of_swif...

For that, I'm exploring Rust now, but it's certainly far harder than using Swift.

I've also looked at Nim, but I think it's too immature for use on iOS/Android?


The Kotlin team is making a real effort to make the language truly multiplatform. It's still not production ready in my opinion for anything you wouldn't use Java for but they keep improving their JavaScript and native support. The iOS app for their official conference is written in Kotlin for example


I don't think it's too immature. People have successfully targeted iOS/Android with games in Nim.


I judge from what little I can find. Does a good reference exist on how to do it?

Even if I keep Rust, I think Nim covers a nice spot to add to the toolbelt.


Given the maturity of iOS, it's not unexpected to see the uptake of Swift be so slow. Though the article is lacking a "total binaries" number.

Swift is a great language to work in.


I have to comment on this. Swift is one of those languages that make me look forward to opening up Xcode in order to start writing code. It's just such a fantastic language, and it's hard to explain why until you've actually started writing code in it.


Honestly curious: What makes it such a fantastic language?


I think optionals are my favorite feature. That may be due to me coming from a Java background (which has also had optionals since Java 8, but I feel they're rather clumsy compared to Swift's and those of other modern native languages).

By doing optional chaining you can do something like: if let name = result.person?.firstName { ... } which only evaluates firstName if person is not nil, and then stores the result in name which is immutable.
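A self-contained sketch of the pattern described above (the Person/SearchResult types are made up for illustration):

```swift
struct Person { let firstName: String? }
struct SearchResult { let person: Person? }

let result = SearchResult(person: Person(firstName: "Ada"))

// person?.firstName evaluates to nil if any link in the chain is nil;
// `if let` binds the unwrapped value to an immutable constant.
if let name = result.person?.firstName {
    print("Hello, \(name)!")   // prints "Hello, Ada!"
} else {
    print("No name available")
}
```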

It's easier to avoid mutable code in Swift than in many other languages. For this reason I would love to start working with Swift on the backend at some point in the future.

I like extensions - being able to extend any class I want by just typing it into any file I want. This helps with iterative development and trying things out.

Speaking of iterating: if I want to try something quickly, I can also drop down to a shell and start the Swift REPL to experiment.

I like how built-in functions like zip and mapValues are available, so it in some ways has a Pythonic feel. I like the syntax of building strings by just taking a variable name and going let s = "Hi there \(name)!".
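A quick sketch of the conveniences mentioned above (the data is made up for illustration):

```swift
let names = ["Ada", "Grace"]
let ages = [36, 45]

// zip pairs two sequences, Python-style
for (name, age) in zip(names, ages) {
    print("Hi there \(name), age \(age)!")   // string interpolation
}

// mapValues transforms a dictionary's values while keeping its keys
let scores = ["alice": 0.91, "bob": 0.78]
let percentages = scores.mapValues { Int(($0 * 100).rounded()) }
print(percentages["alice"] ?? 0)   // 91
```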

There are lots of other things I enjoy, such as subscripts (see why here: https://docs.swift.org/swift-book/LanguageGuide/Subscripts.h...).

Some things I would like to see improved are Xcode itself and Swift's capabilities on the backend.


Thanks for the answer!

I've been waiting impatiently for a better Swift backend ecosystem, because I'd love to use it for web development as well.


I’m having a great time in Vapor 3.


Vapor is pretty amazing. Super fast and pretty fun to work with; also a very friendly community. Ray Wenderlich’s team as well as Paul Hudson also have some great books on server-side Swift for those looking to learn this stuff.


As a mainly C# developer, it seems we have a lot of features in common!


yeah, string interpolation is such a killer feature. I love using it in Python 3.6.


not parent, but features of Swift that I _really_ miss when working in other languages are:

- rich enums - (https://appventure.me/2015/10/17/advanced-practical-enum-exa...) Much of my code is now encapsulated in the enum, where it logically belongs, instead of distributed throughout some class that consumes the enum.

- highly expressive pattern matching - (https://appventure.me/2015/08/20/swift-pattern-matching-in-d...) Swift's pattern matching is especially powerful with a switch statement. I frequently bundle multiple values into a tuple and switch across them, which allows me to flatten many if-else pyramids, and makes control flow more obvious up front.

- Optional - (https://developer.apple.com/documentation/swift/optional)

- Protocol extensions - (https://docs.swift.org/swift-book/LanguageGuide/Extensions.h...) Other languages use things like abstract classes or traits to implement this behavior, but I find protocols to be much more composable.

- The standard library - Swift's standard library is really well thought out. Many standard types have been built on top of highly reusable (and easy to reason about) types or interfaces that provide enormous utility when I adopt them in my custom types. Examples of this are: Codable (https://developer.apple.com/documentation/foundation/archive...), Equatable and Hashable (https://developer.apple.com/documentation/swift/adopting_com...)

There's more than that, and Swift is definitely not without its shortcomings, but those features have fundamentally changed how I reason about code. I frequently find myself wishing I could reach for similar tools in other languages.
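A small illustration of the first two points above - behavior encapsulated in an enum with associated values, matched exhaustively with a switch (the Barcode example is a common teaching case, not from the linked articles):

```swift
enum Barcode {
    case upc(Int, Int, Int, Int)
    case qrCode(String)

    // Logic lives on the enum itself, where it logically belongs.
    var summary: String {
        switch self {
        case let .upc(system, manufacturer, product, check):
            return "UPC \(system)-\(manufacturer)-\(product)-\(check)"
        case let .qrCode(payload) where payload.hasPrefix("https://"):
            return "QR link: \(payload)"   // `where` clauses refine matches
        case let .qrCode(payload):
            return "QR text: \(payload)"
        }
    }
}

print(Barcode.qrCode("https://example.com").summary)   // "QR link: https://example.com"
```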


I agree with that list and I’d also add value types in structs. Getting rid of all the shared state that objects and reference types add has been a big win for stability and correctness in my apps. I’m getting out of iOS development in favor of the web but I’ll miss swift.
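A minimal sketch (illustrative names) of the value-semantics point: assigning a struct copies it, so mutations can't leak across references the way they do with classes.

```swift
struct Settings { var fontSize: Int }

var original = Settings(fontSize: 12)
var copy = original      // structs are value types: this is an independent copy
copy.fontSize = 24

print(original.fontSize) // 12 - unaffected by the change to copy
print(copy.fontSize)     // 24
```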


To add to the other good answers here: I love its succinctness. The syntax itself is succinct (but still extremely readable) and the pervasive availability of functional-style methods (map/filter/reduce/etc) make it easier to keep lots of code in your head and on your screen at one time, making it much easier to understand complex code.


Not OP, but if you look at Rust, take out the borrow checker and you basically have Swift. It's a nice combination of modern OO and functional language constructs that make it easy to write elegant abstractions.


Python like


It's funny to hear people say that, when I absolutely HATE writing in it. Give me Objective-C any day.


I assume we'll see it being used more by Apple at the system level a year or two after they get the ABI stable. Somewhere around iOS 14 or 15, maybe.


While I think dyld is probably short for "dynamic load", I initially read it as "Dylan Language Daemon" when I saw the phrase 'shared dyld cache'.

I know Apple was heavily involved in the Dylan language several years ago. Does anyone know if Swift was influenced by Dylan?


Dynamic libraries on macOS are dylibs. There is a debate over whether 'ld' is short for load/loader or for link editor. So you can read dyld either as the dynamic library load tool or as the dynamic link editor.

A shared dyld cache would be a cache of metadata used for linking, but not sure if that is for link-time or runtime.


66 binaries out of how many in total?


Has there been widespread adoption (or migration to Swift) in "large" companies for their flagship products outside apple?


What about macOS?


I wonder if there is a way to correlate the number of bugs/issues with binaries written in Swift. It'd be interesting to see whether Swift correlates positively with bugs/crashes.


Actually I disagree. Since I started writing new apps exclusively in Swift, my apps have crashed exactly zero times according to Fabric/Crashlytics. I adopted day one, never looked back. It is an incredibly safe language when used correctly (never force unwrap for example, ever). While previously my Objective-C apps did have numerous crashes in them. I took on an Objective-C contract recently, and it was a nightmare.

Of course, mixing with Objective-C may actually be the cause of issues or crashes? Who knows.


As a counter-example, the macOS Dock was rewritten in Swift in macOS 10.12, and Mission Control was super buggy for me then. I'm not blaming this on Swift, most rewrites are buggy at first. And it never outright crashed, but getting stuck in inconsistent states is not much better.

In fact, I would argue that this new trend of defensive programming in Swift will make software worse in the long run. We had a tradition of sending crash reports back to developers. If everyone now starts their methods with `guard let param = param else { return }`, software will silently fail on end user devices, and everything will look fine in Crashlytics/App Store Connect.

I'm not saying that this is what your apps are doing. But I know that Apple bragged about their record low in crash numbers at a time when I ran into different glitches across all of their apps every single day. It's a flawed metric.


True - I think Crashlytics still doesn't work very well for logging unusual but non-crashing program situations. I would like an elegant solution for that. I always throw assertion failures in case something weird happens when handling an optional.

However, if you adopt different patterns with Swift, you can avoid quite a lot of optionals - for example, by giving every view a State enum with associated values:

    class PersonView: UIView {
        enum State {
            case empty
            case loading(personId: String)
            case loaded(person: Person)
        }

        var state: State = .empty {
            didSet {
                switch state {
                /* handle all different states */
                }
            }
        }
    }
Every time you switch state you need to provide the right associated value, and every time you are in a given state that value is guaranteed to be there, where before you would have had an optional personId and an optional person.

And it's applicable almost everywhere. I barely use optionals anymore unless I really can't replace them with an enum.


> I think Crashlytics still doesn't work very well for logging unusual but not crashing program situations.

Have you tried

    Crashlytics.sharedInstance().recordError(error)
I use it to log errors on API calls in my projects. It comes in handy when I'm using third-party services and they decide to break something on their end.


The idea of an option type (Optional in Swift) is to use it only for variables that can truly assume a "none" value at some point. With the guarantees of such a type, it is easier to reason about the correctness of a program - compared to Objective-C, where a pointer anywhere may be null at any time. Optional shouldn't be used for most variables and parameters. `guard let param = param else { return }` should be something pretty rare; I would mostly expect to see it for weak self captures in completion blocks.

Also, take into account that in Objective-C you can send any message to a nil pointer, and the result will be nil or 0 (for scalar types), as defined by the language standard.


> Optional shouldn't be used for most variables and parameters.

I'd agree if Swift was a general-purpose C++ replacement, but most developers use Swift to write Cocoa apps. viewController.navigationController is optional, view.superview is optional, label.text is optional; all of Apple's frameworks are built on mutable state where almost everything can be or become nil.

(We can make sure that most variables and parameters aren't optional, but that only pushes the problem to other lines of code.)

If we know that we've loaded a view controller from a storyboard, is `self.storyboard!.foo` really a code smell? What else should we do? The type system doesn't let us express what we know/assume about the situation, unless we completely sidestep Apple's controller infrastructure and write our own thing.

> Also, take into account that in Objective-C you can send any message to a nil pointer, and the result will be nil or 0 (for scalar types), as defined by the language standard.

Right, I am not saying that Objective-C handled this any better. But it boggles my mind that we have Swift's complicated type system now, and use it to rebuild an implicit source of errors from Objective-C.


Replying to your specific examples: I would say `storyboard!.foo` and `viewController.navigationController!` (when you know the view controller is in a navigation controller) are the correct thing to do. And if your assumptions are false, that is going to cause a trap at runtime, and you will get your crash report.

All the existing Apple frameworks were originally designed for Objective-C, and if redesigned with Swift's type system they could have a much safer API. But what are you suggesting Apple should do? Throw thousands of man-years of their own and third-party developers' code away, and ask everyone to start from a clean slate? I prefer the current approach where, while the Swift language and tooling are maturing, Swift acts as an incremental improvement over Objective-C.

I should also add that if you avoid putting too much of your code in your views (networking, data storage ...) then you can put those components in embedded frameworks that can have a nice safe Swift API.


> But what are you suggesting Apple should do?

Apple could have spent less time on their new programming language if they hadn't insisted on reverting every single design decision in Objective-C: the way mutability and constness are handled, NSObject as a root class, naming conventions, the string class, creating their own package manager, etc. There's so much pointless bridging going on.

That would have given Apple plenty of resources to iterate on their current UI frameworks. If the IB/storyboard infrastructure leads to nil-heavy and stringly-typed code, why not push a first-party UI DSL? Why do we even need a third-party platform like Crashlytics to debug errors? Why is it so hard to write UI tests and have them run on a CI? Has IB_DESIGNABLE started working reliably at some point? Why is basic stuff like this not a blocker? [1]

And yes, at some point I think UIKit and AppKit should be scrapped in favor of a new framework (with a slow migration path, not a clean cut). I was hoping for UXKit to be just that, but instead we got the monstrosity that is Marzipan. Apple's focus is on the language, when it should be on the frameworks.

[1] https://bugs.swift.org/plugins/servlet/mobile#issue/SR-6795


>Apple could have spent less time on their new programming language if they hadn't insisted on reverting every single design decision in Objective-C: the way mutability and constness are handled, NSObject as a root class, naming conventions, the string class, creating their own package manager, etc. There's so much pointless bridging going on.

But every one of these is arguably a good decision. Swift.String has a significantly better interface. Immutable types are a huge win. It doesn't make sense to have a root class when you have non-class value types. You have to change the naming conventions if you change the calling syntax, which was a Swift goal.

You also can't take LLVM gurus working on a new language and re-task them to take on UI framework feature adds without a lot of friction.

(BTW, my understanding was that NSObject isn't the only root)


I disagree. I think all their decisions were bad, which makes Swift a bad language. They focused on the wrong thing.


I did notice that a lot of Objective-C's uncertainties were paved over as the years passed. Most libraries feel a lot more Swift-y now, but there are some remaining problems, and it's a pity that Storyboards are one of them. I hope they'll revise the system some day.


The point stands, however, that just because developers are forced to handle the `nil` somehow, doesn't mean they're handling it in a sensible way that makes for consistent UX. Crashing sucks from a user perspective, obviously data corruption is even worse, but just throwing up your hands and returning early from a view controller method doesn't really do anything for the user either.

I generally find it useful to have the concept of Optional, making me think about "could this thing be missing?" explicitly. But I've started to wonder if it is, rather than "preventing an entire class of bugs", actually just making them pop up elsewhere when Optional values start brushing up against that top level of user-visible stuff.

Maybe we (I) just need to get better at thinking about missing data conditions at the product design stage, or more robust defaults.


> when Optional values start brushing up against that top level of user-visible stuff

Perhaps you are under-using sum types - or enums with associated values, as they are called in Swift. Difficult to get into details in this format, but for example, instead of:

    var connection: Connection?
    var someOtherState: SomeOtherStateRelatedToConnected?

you have:

    enum State {
        case connected(Connection, SomeOtherStateRelatedToConnected)
        case disconnected
    }

    let state: State




> In fact, I would argue that this new trend of defensive programming in Swift

Swift is not inherently defensive; Swift can be defensive, but it doesn't have to be if you don't want it to.

It's much more "offensive" than Objective-C, for example.

Just add a "!" and it's the same as a NullPointerException == fatal error.


”the macOS Dock was rewritten in Swift in macOS 10.12, and Mission Control was super buggy for me in that version. I'm not blaming this on Swift”

Foo was rewritten in Swift, and bar was super buggy. Why would you blame that on Swift?


Mission Control is part of Dock.app.

Turns out one of these bugs even has its own blog post: https://medium.com/@julioromano/working-around-an-infamous-m...


If anything, since a lot of stuff is being rewritten from scratch, a lot of new bugs appear that were not there. At least that's my experience on the Mac with software that has been rewritten in Swift.

Obviously, it's not Swift's fault for that, but if even people at Apple make rookie mistakes such as using the force unwrap operator, it says something about the language.


> Obviously, it's not Swift's fault for that, but if even people at Apple make rookie mistakes such as using the force unwrap operator, it says something about the language.

It's still often suggested by Xcode if you use an optional which has not been unwrapped in any way. "Click here to make error go away" adds an !.


Good point. Fortunately this fixit was improved over the summer: https://forums.swift.org/t/resolved-insert-is-a-bad-fixit/10... I believe it made its way into Swift 4.2/Xcode 10.


Good stuff. Now to fix my only other pet peeve, which is declaring IBOutlets with a ! by default. In an ideal world, IBOutlets would be checked at compile time unless they're declared with a ?


Well, they have to be Optionals one way or another, because they can't get hooked up until after init[0], and Swift won't allow that. But yeah, I imagine this is on someone's todo list somewhere. IB already knows whether things are connected properly, and actually so does the source editor, since it shows the little circles in the gutter.

--

[0] Unless you could invert the order of the unarchiving...not sure what difficulties that would cause.


If Apple's own developers are just slapping '!'s on optionals to make errors go away, I would be seriously concerned.


Well, hiring practices in the US tend to be skewed toward people knowing a ton about data structures and algorithms versus people having years and years of practical programming experience. I wouldn't be surprised. I've seen Swift code built by pretty smart people that was one big exclamation-fest.


Can you explain force unwrap?

Actually, I was wondering why Swift has a lot of these things I don't see much in other languages - Optionals, for example. Is it because it is focused on coding highly responsive UIs where different states might not yet be available?


They are not in other languages much because those languages are old. A lot of Swift's features have been in more academic functional languages for a long time. However, it has taken time for mainstream languages to adopt these ideas. The industry moves much slower than academic niche languages.

But most new languages have some variant of this stuff: Kotlin, Scala, Rust.

I program a lot in Julia, which is a scripting language, and it does not allow you to use null freely either. That may seem odd for a language that does not type check at compile time, but it actually makes sense.


Force unwrapping is what it sounds like: you take an optional, and forcefully convert it to a nonoptional. Obviously there must be a valid value at this point in the program, or it will crash. Optionals are useful not only for UIs, but for handling any sort of state where it's possible for something to not have a valid value at any point in time. It just hoists such a state into the type system rather than checking this at runtime.
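A tiny sketch of the above - `!` converts `T?` to `T`, trapping at runtime if the value is nil, while nil-coalescing (`??`) is a common non-trapping alternative:

```swift
let maybe: Int? = 42
let definitely = maybe!    // fine: a value is present
print(definitely)          // 42

let missing: Int? = nil
// let boom = missing!     // would crash: "Unexpectedly found nil"
let safe = missing ?? 0    // falls back to a default instead of trapping
print(safe)                // 0
```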


These bugs are likely to be of a lower average severity though, no?

C-family languages let you take off your whole leg with the slightest mistake.


No, clearly not. Take a look at major components rewritten in recent years, such as the Xcode code editor, the new build system, and the Dock; those are crashing way more than their ObjC/C counterparts for whatever reason. A crash is a crash. You may claim that the errors here are "less fundamental", but the fact remains that the new software crashes more than the old.


Right. Users don't give a hoot what language the software is written in, what design patterns you're using, if you're functionally immutable. The code is there to serve a purpose. If it crashes, it fails; the root cause doesn't really matter to the user.


This feeling that "man, Dock.app is crashing more", followed by a few otool queries showing more Swift libraries, gives me a vague suspicion that Swift rewrites have been leading to more end-user-noticeable crashes. I'd find it interesting to see actual statistics, in the form of "x crashes per hour of program usage". Perhaps it'd actually be lower than Obj-C, but it doesn't feel that way. Is it due to Swift error handling or just to the rewrites themselves? This would be an ideal time to get such statistics! It's not often that a giant platform starts switching to a new language; the chance to gather real stats, rather than bikeshedding or doing undergrad cohort experiments, would be very interesting.


Sure, if it's a good old whole-program-crash either way, that means they're equivalent severity. It's not my intention to softball Swift with some nebulous idea of what counts as a proper 'fundamental' bug.

In defence of Swift, it could still be that they spent less time bug-hunting than when they were developing in Objective-C, to reach the same, or inferior, level of program correctness.

Also, we're comparing a new product to a mature one. Were the original Objective-C programs stable in their first release?


My point isn’t to say Swift is bad. I personally don’t like it, but that isn’t the point. In my original comment, I wanted to say that rewriting software just because of a new language is probably a bad idea.

Apple could have continued using the existing software, rather than just throw away stable code for the sake of new devs to use a new language.


Agree completely - I'm not convinced that Swift is so much better than Objective-C that it makes sense to throw out mature production code.

Mandatory link to Spolsky's blog post on exactly this:

https://www.joelonsoftware.com/2000/04/06/things-you-should-...


You are basing that on nothing.


Glad to see Apple moving away from ObjC (when it makes sense to).

This seems to contrast with MS, who shied away from using .NET in their own systems when they released it (though I might be wrong).


Because especially in its infancy .NET was quite slow for the machines of the time (they somewhat fixed that in later releases); it also meant a full rewrite most of the time (COM interop is well supported, but still rather nightmarish at times). Given the paramount importance Microsoft gives to backward compatibility, I understand why they made their choices.


Apples and oranges. .NET is a higher-level framework for JIT-compiled languages (mainly C#). Only in the last two years (almost 20 years after its inception) has .NET Native come into the picture.

The performance difference between the garbage-collected runtime and its C predecessor was too much, and the initial attempts at a rewrite in C# with Longhorn were a major failure.


SQL Server Management console, Visual Studio, Windows Phone 7, Silverlight.

Then there was Longhorn, which failed more due to internal politics than due to technical issues, as Midori later proved, only to be killed by management, as described in Joe Duffy's postmortem reports.

WinRT/UAP/UWP are built on top of improved COM, and .NET has a native personality for them, using AOT compiler for WinRT/UAP (MDIL) and UWP (.NET Native).

Most of the new UWP stuff is written in .NET Native, with C++ taking care of the UWP runtime infrastructure, and the DirectX composition engine (Visual Layer).

The majority of Windows UI team UWP and FluentUI demos are usually done in .NET Native.


Actually most of the “new” (last three years) UWP Windows 10 UI is C++, and there is more JavaScript/HTML than .NET Native. The Xbox dashboard is .NET Native though and all of us on the Windows UI team are jealous.

The problem that we have is that we have so much infrastructure around C++ that it is expensive to switch. Even if you get faster compile times or theoretically less memory management bugs, you may lose the productivity in other ways if the rest of your tooling isn’t there.

With C# you definitely pay a small cost in memory consumption and a decent cost in binary size vs. C++ though, which still matters.


That being the case, it appears to me that the old WinDev vs. DevTools standoff is still ongoing, despite all the reorganizations of the last couple of years.


Thanks for these examples. Do you know since when SQL Server Management Console has been done in .NET? (To be fair, the last one I used was around 2008.)

Silverlight was more of a platform than an "internal product" and it was based on .NET so it was only natural (it doesn't seem to have taken off though)

And yes .NET makes COM usable (well it is usable using MS libraries in C/C++, otherwise, forget about it :) )



Actually Visual Basic made COM usable :)


I would say Delphi did it better, until VB 6.0 that is, but then again I am biased. :)


Is the ':)' trolling? grin

IDispatch and variants were/are an abomination.


I'm not. Objective-C is a beautiful and amazing language. They should have done Objective-C 3.0 instead of Swift.


There was Paint.net, and they did use god-awful WPF in VS 2010 IIRC


It doesn't seem to be by Microsoft: https://en.wikipedia.org/wiki/Paint.net


Yeah, if you are referring to Paint.net in the App Store, the developer is not Microsoft.


I stand corrected, thanks


Would be nice to know what share of total binaries that represents.

On a side note, iOS 12 was great but seems riddled with memory leaks that eventually require a reboot every few days.


Have had it since release and iOS 12 is faster for me than iOS 11. Nothing but good things to say about this release. I'll jump on the "are you sure it's not an app" train.


I've only noticed some UI glitches in some of the apps I am using, including ones I personally developed. I can only speak for the latter, but it was due to scruffy code that should never have worked as expected in the first place.


I saw a small glitch where Apple Pay was offered as an auto-complete in the 3rd slot of the predictions bar, but the actual auto-complete prediction was also in the same slot, so the two were on top of each other. Not sure if that was the app or the OS, but that's the only issue I've personally seen so far since release.


I’ve been running iOS 12 since the early betas and only rebooted for OS updates. Sure it’s not an app or something?


There's one particular mobile game that is suspicious... However, this issue was nonexistent with iOS 11. The memory diagnostic app shows the "Other" section constantly growing and not being released.


> The memory diagnostic app shows the "Other" section constantly growing and not being released.

Are you sure you're not talking about storage space? I don't know what memory diagnostic app you're talking about.


>memory diagnostic app

Don’t forget to update your anti-virus too!


> The memory diagnostic app shows the "Other" section constantly growing and not being released.

Sure it's not 'the memory diagnostic app' that's wrong?


If an app (even a badly coded one) causes memory leaks in the OS, that's still a fault of the OS. Of course, fast-moving OSes like Android or iOS are always going to have these kinds of bugs, and it's probably not related to Swift, as Swift isn't used in the kernel internals.


>If an app (even badly coded) causes memory leaks in the OS, that's a fault of the OS still.

How's that so? An app doesn't have to leak OS memory, just leak its own memory.


Well, leaking kernel memory is very different than leaking application memory, because one can be paged out while the other is wired permanently.


iOS does not swap.


It will page out clean pages as necessary, though.


Wired? All memory is always physically wired to the motherboard.

It's unsafe for the kernel to swap the memory out, because paging doesn't (usually) work while running kernel code.


Wired as in not swappable.


I write kernel mode drivers as my day job, and this is certainly the first time I've ever heard of non-swappable being called "wired".

It might be called "non-pageable", "pinned", "non-swappable", etc. Out of those, I'd prefer "non-pageable", because it's the most descriptive term for it.

Non-standard terminology is not great for communication.


> I write kernel mode drivers as my day job, and this is certainly the first time I've ever heard of non-swappable being called "wired".

For what OS? Consider https://wiki.freebsd.org/Memory

Wired

    - Non-pageable memory: cannot be freed until explicitly released by the owner
    - Userland memory can be wired by mlock(2) (subject to system and per-user limits)
    - Kernel memory allocators return wired memory
    - Contents of the ARC and the buffer cache are wired
    - Some memory is permanently wired and is never freed (e.g., the kernel file itself)

OSX is derived from BSD.
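For the curious, a userland process can wire its own pages via the mlock(2) call mentioned above. A minimal sketch (assuming Darwin; on Linux you'd `import Glibc` instead, and the call is subject to RLIMIT_MEMLOCK):

```swift
import Darwin   // mlock/munlock, sysconf, perror

// Sketch: wiring one page of our own memory with mlock(2). Wired pages
// cannot be paged out until munlock(2) is called or the process exits.
let pageSize = sysconf(_SC_PAGESIZE)
var buffer = [UInt8](repeating: 0, count: pageSize)

buffer.withUnsafeMutableBytes { raw in
    if mlock(raw.baseAddress, raw.count) == 0 {
        print("wired \(raw.count) bytes")
        munlock(raw.baseAddress, raw.count)   // unwire before the buffer goes away
    } else {
        perror("mlock")   // e.g. over the per-process memlock limit
    }
}
```

Kernel allocations are wired implicitly; this is just the userland path from the FreeBSD notes above.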


> For what OS?

Check my other reply above. Did some FreeBSD work ~15 years ago, but I guess I'd already forgotten.


> I write kernel mode drivers as my day job, and this is certainly the first time I've ever heard of non-swappable being called "wired".

Really? Because it's literally called that in the macOS Activity Monitor app; the term is also used in Apple's kernel documentation and APIs (e.g. in the `mem_and_io_snapshot` struct defined in 'debug.h' in Kernel.framework). It's also mentioned in the Kernel Programming Guide.


Interesting.

Never did macOS drivers, just bare metal, Windows and Linux so far, macOS is not very big in our niche. Weird that Apple's terminology is so different.

Typing this on a MacBook Pro, and I can see that Activity Monitor does mention "wired" memory.


I did a little googling and I see the terminology is also used in the Mach microkernel, which macOS's XNU kernel was built upon. It looks like it's not something Apple came up with.


It's a macOS thing. "Wired" confused me initially too. All it means is that pages marked as "wired" can't be paged out.


There are thousands of binaries in iOS 12, so this is at most a couple percent.


I haven't rebooted in the week I've been running it. Probably a specific app.


I haven’t noticed memory leaks, but I haven’t been checking.

What I’ve noticed is that everything is faster. Until it suddenly isn’t. And then swiping between home screen pages takes approximately one full second and only shows one intermediate animation frame. In other words, it suddenly is unusable. Background app chewing up resources? No idea. I just know it didn’t do it in iOS 11.

Oh, and this was throughout the beta. It’s been decreasing as time goes on though.


Whether related to an app or solely to the OS, as well tested internally as new releases may be, it's still a new major release. Things'll iron out soon enough I'm sure.

Love, - guy who writes software and knows that all software is terrible


All software is terrible and not all bugs will be (or can be) ironed out.

It really amazes me how people are always forming ideological groups: vitamin C is the best cure for everything. Vitamin C is useless, just forget it. iOS has memory leaks. No, you are holding it wrong, iOS 12 is good.

And here at least we are a techy bunch, but outside the tech circle, how people treat software is just astonishing. Like the guy who took photographs while sitting in the passenger seat, because his Tesla is a fully autonomous car, perfectly capable of driving itself.


What's the solitary binary that used Swift in iOS 9?



A bit useless. Are they using it in the top ten (mail, web, cal, clock, etc.)? How much Swift compared to ObjC code? Those are the real questions I would expect your article to answer.


And all those answers are IN the article.

>Are they using it in the top ten (mail, web, cal, clock, etc.)?

No.

>How much of Swift compared to ObjC code?

Much less.

It makes sense too. You don't rewrite perfectly good programs in a new language just for the fun of it.


>perfectly good programs

The Holy Grail of software engineering


it is indeed used in a few top 10 apps


Apps that predate Swift will likely contain little to no Swift. There is no good reason for it to be otherwise.


I rewrote a commercial app in Swift and saw a lot of advantages. The rewrite uncovered a number of bugs thanks to the stricter type checking and general strictness (e.g., no uninitialized variables).

It also allowed us to get more people involved in development. People new to Apple development found Objective-C to be a bigger barrier: odd syntax and a bit old-fashioned.

Mind you, I quite liked Objective-C. But it seems a bit pointless when you've got Swift.
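For what it's worth, the kind of bug the stricter checking catches can be sketched in a few lines (names here are made up for illustration):

```swift
// In Objective-C, messaging nil silently returns nil/0, so a missing
// value can flow a long way before anything breaks. In Swift, the
// optional must be handled before the value can be used at all.

struct User {
    let name: String
    let email: String?   // explicitly allowed to be absent
}

func greeting(for user: User) -> String {
    // Using `email` as a plain String here is a compile error; we must unwrap.
    if let email = user.email {
        return "Hi \(user.name) <\(email)>"
    }
    return "Hi \(user.name)"
}

// Uninitialized variables are likewise a compile-time error, not a latent bug:
//     let count: Int
//     print(count)   // error: constant 'count' used before being initialized

print(greeting(for: User(name: "Ada", email: nil)))   // prints "Hi Ada"
```

The nil case still exists, of course; the difference is that the compiler forces you to decide what to do with it up front.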


The Dock is one of the oldest apps on macOS; it was rewritten in Swift when Swift was announced.



