Blog

  • Go watch The Boys.

The last season of The Boys is airing on Amazon Prime right now, and if you haven't watched it yet you should.

It's gory, and not something you would want kids to watch, but it's damned good. While I'm kind of sad that it's going to be ending after this season, it's shaping up to be a banger of a send-off.

The folks at Amazon have had a couple of really good shows in the last few years. Reacher is a solid watch, and the Wheel of Time series they did was good too. It's too bad that one was cut short, since there's a pile of material they could have run through for that series.

    Either way, if you haven’t seen it yet go get started on The Boys. It’s well worth the time.

VMware and alternatives.

Working in software, I frequently find myself having to test things with my company's products and with products that our stuff interacts with or potentially competes with. A huge tool for this has been the ability to run virtual machines on my desktop, my laptop, and the dedicated "servers" that I keep around. For the longest time VMware was the gold standard for this workflow and I was a huge proponent of their products.

As much as I like free software and will use it whenever I can, for a very long time VMware Workstation and ESXi were the gold standard, and I ran both at home and at work to simulate large pools of machines and to test things that needed looking into.

However, in 2022 Broadcom announced that it was going to acquire VMware.

Anybody who's been in enterprise software for a while had a bad feeling at that point. Broadcom has a reputation for acquiring companies and maximizing shareholder value at the expense of end users.

Now roll the clock forward two years and the free version of ESXi that a lot of people cut their teeth on is gone, with nothing from Broadcom to replace it. And sure, they have made the desktop products, VMware Workstation and Fusion, free for pretty much anybody who wants to use them, but there's also no support for those products from Broadcom any more. If you have any issues, you'd better pray that someone on the forums has an answer – and that Broadcom continues to see enough value in the products to keep them working properly.

Personally, once I heard about the purchase I started looking into other options. After playing with XCP-ng, Proxmox, and a few others, I eventually landed on Proxmox as a replacement on the server side of things. It worked well enough that it replaced the OS not only on my own servers but on all the devices at work that were running VMware products. We still have the desktop product around, as it's still a better user experience than VirtualBox, but as we start refreshing hardware I'm starting to push my staff onto VirtualBox just to see if we can't make that work, in preparation for Broadcom doing something to finish off killing VMware Workstation.

Given that it's always a good idea to have a backup plan, I'm probably going to toss XCP-ng on another box at home and start playing with it more, just in case Proxmox does something that I can't stomach in the future. Hopefully that's a good long way off, though.

  • Ok, new camera, now for the extras.

While my D70 did a great job for me for 20 years, it was recently time for a new camera, and after a lot of digging around I picked up a Nikon Z50 II to replace it.

I was considering something in Nikon's DSLR range rather than the mirrorless cameras, but after looking into the differences between them I really liked that the Z series was a bit smaller and lighter, which is better for packing the thing around while travelling. That was always one of the knocks against the D70 – it wasn't really a small camera.

After looking at the options I ruled out the Z30, since I like having a viewfinder for composing photos, but anything above the Z50 or Z50 II jumped the price up quite a bit without giving me enough to justify paying the extra.

Since there are generally deals on kits with these things, I picked up a two-lens kit that paired the body with a DX 16-50mm f/3.5-6.3 and a DX 50-250mm f/4.5-6.3. I know a lot of people poke at the kit lenses as being less than ideal, but the pricing of the kit made the lenses almost two for the price of one, and it gave me some glass to put on the body to get started.

The lenses are good enough for most of what I'm doing right now; they give me a good range of focal lengths and are fast enough that I can actually use them when I'm walking around. Personally, though, I prefer shooting with prime lenses, not zooms. On the D70, the lens you'd most often find attached to the body was the 50mm f/1.8 that I picked up, and after a short while I found I really missed having a fixed lens on the camera. Shortly after I bought the kit, the NIKKOR Z DX 24mm f/1.7 went on sale, and after reading some reviews I picked that up as well.

In the end I'm going to be looking for a couple more prime lenses, probably something in the 40-60mm range and then something around 100mm or so. There are lots of nice options in the lineup, and I really have an eye on the MC 105mm f/2.8 VR S and the 50mm f/1.4 that they have right now.

The other thing I had to do was pick up some extra batteries and some SD cards. The batteries, man – those were just wild. On Nikon's website their batteries run about $100 CAD, and you can't charge them outside the camera unless you buy a charger that isn't included with the Z50 II. What I wound up finding was a set of batteries from a company called SmallRig that not only replace the OEM one but have a USB-C connector on them, so you can plug the battery straight into any USB power source to charge. That option being about half the price didn't hurt the decision either.

SD cards were pretty simple: just some reasonably fast SanDisk ones at 256GB, and I'm good to shoot for a very long time with those.

The nice thing about sticking with Nikon is that the flash unit I have works with the new body. It's limited to lower sync speeds than the newer units, but it works as well as it did on the D70, and if I'm in a space where I need a flash head I'm probably not too concerned about the 1/60th sync time. At some point I'll swap it out for something newer, but that can sit on the back burner until some of the lenses are swapped out or the existing flash unit packs it in.

I've had the camera for a couple of months now, so in about a year I should have a handle on it and be more comfortable with the thing. Until then I just need to get out there and snap photos as often as I can.

  • OS Choices.

So it's time again to reload the operating system on my gaming machine, and the question I had to answer was which operating system to run this time around.

The options are to stick with Windows 11 or move over to some flavour of Linux. This specific machine is primarily a gaming machine these days, since anything of importance has been moved over to the MacBook Pro that I'm typing this on now. And because I'm going to be gaming on the thing more than anything else, that drives me back to running a version of Windows.

I know that gaming on Linux has come a very long way in the last few years with what Valve is doing, but every time I try to run games on a Linux machine I wind up dealing with fiddly bits and problems – all things that can be overcome, but things I shouldn't have to overcome at all.

The gaming machine is essentially a toy for me. I want to sit down in front of it, click an icon, and enjoy a couple of hours of distraction from my day to day. I work with Linux a fair bit at work and run multiple systems with various editions of Linux at home, and in the places where I have it running it's solid. Gaming, though, is not quite there, and Windows isn't quite bad enough to force the switch for me right now.

    Will this change later? Possibly.

Microsoft has been saying the right things about cleaning up a lot of my objections to the operating system in the last few weeks, but those are just words, and we'll have to see if their actions actually pan out.

For now, though, it's an install of Windows. The question is which one?

Well, it's some flavour of Windows 11, since Windows 10 is out of support. And while using the LTSC/LTSB or IoT versions might seem like an interesting idea to avoid some of the bloat, getting licensing for those versions isn't always easy for a home user, and I've seen a few oddities with them when trying to use them for general-purpose work. Again, nothing horrid, but I don't really want to be chasing down problems, so Windows 11 Pro it will be.

And it's Windows 11 Pro mainly because that's what I have a license for. The non-Pro edition would have worked just as well for gaming, but it's nice to be able to remote desktop into the machine when I'm away from home. The other added bonus is that you can turn off some of the annoying Microsoft Account prompts, as well as some of the cloud content that is generally a headache to deal with.

Another thing to consider is that I can go from bare metal to a working OS fairly quickly on the Windows side of things. Over the years I have built a number of scripts and other tools to speed up deployments, and with Winget and a few other PowerShell commands I can get the software I need onto a Windows machine by running two commands. On a Linux box I haven't spent enough time to build that level of automation, so setting up a machine on that platform would take me longer. It would also involve a lot more tinkering for something that just works on a Windows device.
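As a sketch of what that kind of two-command setup can look like – the package IDs below are hypothetical examples, not my actual list – here's a small wrapper that drives winget non-interactively:

```python
import subprocess

# Hypothetical package list -- substitute whatever your own setup needs.
PACKAGES = ["Mozilla.Firefox", "7zip.7zip", "Git.Git"]

def winget_install_cmd(package_id: str) -> list:
    """Build a silent, non-interactive winget install command for one package."""
    return [
        "winget", "install", "--id", package_id, "--exact", "--silent",
        "--accept-package-agreements", "--accept-source-agreements",
    ]

def provision(packages=PACKAGES, runner=subprocess.run):
    """Install every package in the list. `runner` is injectable so the
    logic can be dry-run or tested without actually touching the machine."""
    for pkg in packages:
        runner(winget_install_cmd(pkg), check=True)
```

In practice `winget export` and `winget import` can replace the loop entirely, which is roughly the two-command flow described above.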

    And really these days I’m all about making things simple.

    There are enough places where I can’t get away with that and have to go through all sorts of mess to keep things secure, locked down, and safe. A gaming machine that’s going to have no critical data on it should just be something that I can play with.

  • It’s tough being interested in tech now.

So someone in the extended family is looking at getting a new PC, and normally I'm all in on helping with that. I enjoy browsing parts, looking for deals, and otherwise just sniffing around to see what I can find.

    However the situation with RAM and Storage just has me shaking my head.

    I bought this same kit of RAM about a year ago for less than half of what it’s going for on Amazon right now.

Even smaller kits are just getting stupid in price. So this time around, digging into parts, something I would easily have been able to do for around $1,000 is suddenly looking like a $1,300 affair. And god help you if you want something with a GPU in the thing.

It's just disappointing that, for the first time in as long as I can remember, the amount of computer you can buy for a given price has dropped significantly.

  • Interesting projects and their use of AI

I came across a video about a neat-looking project today:

    https://pegaprox.com

I run a pile of Proxmox clusters at work and at home for various reasons, and while they work well and the UI is functional, I still have eight different clusters that I'm logging into to manage devices.

Yes, I know we could just put all the servers into a single cluster, but there are reasons we aren't doing that which I'm not going to get into here.

What I did find interesting was that this is presented as an open source option for consolidating control of the hypervisors in a single location. After watching some reviews I was looking at setting it up, but some of the reviews online were insinuating that the entire project was vibe coded by AI. I've played with some of those tools, and while I'm not a coder/developer for a living, playing with them has given me a distaste for vibe-coded applications.

Generative AI has a place in software development – it can review for common problems, audit for common security issues, and do that sort of thing faster than a person can read through the code. However, every time I've played with tools that generate code, they have been super literal when you ask them to build something. For example, I was looking at using one to build a web app to handle checking out loaner equipment for my team, and the first run of the application built a way to check out equipment but no way to check it back into inventory when the loan was complete. Any of the human developers I've worked with would have made the jump and figured that if a device was being checked out it would need to be checked back in – but the AI tool didn't.

    To be fair, it built exactly what I asked it to, so you can argue that the fault here is that I didn’t prompt the tool correctly and technically you are right.

But let's go back to PegaProx and have a peek at their website.

So they don't hide that they are using AI-assisted development. Points to them on that, I suppose, and they do explain their view on the use of AI. They compare vibe coding to getting into a cab and telling the driver to take you someplace with good food, versus picking a place yourself and using a GPS to get to your destination. In both cases you wind up getting something to eat, but in one case you aren't deciding where you're going or how you get there at all, while in the other you're the one picking the route and making the turns.

In the end, the developers of this project say that all the code is reviewed by humans and thoroughly tested, and that they take full ownership of what's being put out. That aligns with my view on these tools. If what they post on their website is accurate, this might be worth a look. However, I don't know the people behind it, and this is a pretty new project – the first "release" was only back in January of this year, so there's not a lot of history to go on.

So, while they're saying the right things, they haven't been around long enough for me to just toss this into production right now. And one of the documents online implied that the authors recommend using the root account on your Proxmox servers to set this up. After poking through the documentation:

Yep, that's what they're recommending. And honestly, that's more of a red flag to me than the use of AI tools. Root accounts are not things you just toss around; integrations like this should always be done through APIs or service accounts, much like Proxmox does with their own Datacenter management tool. Handing the root account for my servers to any other tool is not something I'm comfortable with.
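For contrast, Proxmox's own REST API supports scoped API tokens, so a management tool never needs the root password. A minimal sketch – the host, user, and token values below are placeholders, and the `PVEAPIToken` header format is the one Proxmox documents:

```python
import urllib.request

def pve_token_header(user: str, token_id: str, secret: str) -> dict:
    """Proxmox token auth: Authorization: PVEAPIToken=USER@REALM!TOKENID=SECRET."""
    return {"Authorization": f"PVEAPIToken={user}!{token_id}={secret}"}

def list_nodes(host: str, user: str, token_id: str, secret: str) -> bytes:
    """List cluster nodes using a token that can be scoped to only the
    privileges a dashboard actually needs -- no root credentials involved."""
    req = urllib.request.Request(
        f"https://{host}:8006/api2/json/nodes",
        headers=pve_token_header(user, token_id, secret),
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

The token itself is created under Datacenter → Permissions → API Tokens in the Proxmox UI and granted only the rights the integration requires.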

    I think that this project is definitely worth watching to see where it goes. Hopefully they are onto something good and can keep building something useful here.

  • Why are supply chain attacks terrifying?

In my post on the whole situation with routers in the US I mentioned supply chain attacks, and as someone who works in the software world, those things are absolutely terrifying. I think the topic deserves a bit of a deeper dive.

For people who work in software this is probably going to be a bit boring, but for everyone else, you need to understand how software gets built when people are making products for you to use.

Software can be complex – so complex that in a lot of cases people re-use code to avoid rewriting things from scratch each time. This re-use can take a bunch of different forms, but it usually winds up as a dependency that comes into play when you go to use a product.

So let's say that I'm building software that tracks inventory. Details don't matter much, but at some point I'm going to have a requirement to store data, and chances are I'm going to use some type of database engine to make that happen. I could go and write my own database engine from the ground up and do all the work of developing it, tuning its performance, and getting it ready to go.

Or I can just install PostgreSQL, a FOSS project that already does this. It has solid documentation, community support, and a history of scaling well beyond what I'm likely to require.

    The same thing applies to my user interface. I can build my own from the ground up, or I could use something like Gradio that provides me with a lot of building blocks to make building my UI quicker so that I’m not having to reinvent the wheel.

Now here's where things start to get messy. While our application only has two direct dependencies, each of those products has its own dependencies, and each of those has its own, and each of those potentially pulls in more. How deep is that rabbit hole?

You can get to the bottom of that rabbit hole, and I'm sure there are companies out there that will spend the time to do so. However, I've worked around software developers for a long time, and as sure as I am that some companies handle this properly, I'm just as sure that many do not.
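To put a toy number on that fan-out – all package names below are made up for illustration – a few lines of Python can walk a dependency graph and count everything an app ultimately pulls in:

```python
# Made-up dependency graph: app -> direct deps -> their deps, and so on.
DEPS = {
    "inventory-app": ["postgresql", "gradio"],
    "postgresql": ["libssl", "zlib"],
    "gradio": ["fastapi", "pillow"],
    "fastapi": ["starlette", "pydantic"],
    "starlette": [], "pydantic": [], "libssl": [], "zlib": [], "pillow": [],
}

def transitive_deps(pkg: str, graph: dict) -> set:
    """Everything pkg ultimately depends on, however many levels down."""
    seen = set()
    stack = list(graph.get(pkg, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

# "Two dependencies" is really eight once the whole tree is counted.
print(len(transitive_deps("inventory-app", DEPS)))  # 8
```

Real dependency trees for popular frameworks run to hundreds of packages, which is exactly why nobody audits the whole thing by hand.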

    So why is this important and how does it become an attack vector?

Let's say that my inventory application becomes something crazy good and winds up being used all over the place – even in government systems tracking things that are considered very important. Let's even say that whatever the application is tracking is important enough that there are people who are now very interested in breaking into those systems and getting at the data they contain.

    So now my little application is a target. Potentially one that’s worth some money to somebody who is able to break into it.

So now that I'm a target, people are poking at my software, looking for anything that would let them get into my product and mess around. However, let's say for the purposes of this post that I'm reasonably competent and my app is solid enough that attackers can't get in by going after my code. So now what do they do?

Well, my product has some dependencies, and those products have dependencies, and let's say the attackers walk down that chain and eventually come across something used by one of my dependencies – and now, by extension, used by my product. The objective becomes getting control over that component and bending it to their will.

Perhaps they submit some helpful bug fixes that hide malicious code. Perhaps they bribe, or simply buy control of, the dependency from the person who maintains it now. Either way they now control that dependency, and by extension can get into my chain of dependencies and start looking for ways to cause problems for my product. Perhaps they write code that steals passwords and other secrets. Perhaps they find a way to copy data out of my product. Either way, the attacker has managed to breach my product.

Now let's say that I'm really good at what I do: I maintain a full software bill of materials, so I know this dependency exists and is something I have to track and watch for problems with. In a lot of cases, though, dependencies are just used by the people building the code without much review of the actual code involved – that kind of review is fairly labor intensive. So even if I am crazy good and willing to put in the time to review the code, there's still a chance it's outside my area of expertise and I won't catch what was done and flag it as malicious.

Let's say I'm as good as possible and paranoid as hell, and I decide to fork whatever I'm using and freeze it so that I'm not pulling in upstream changes once things are working. At some point there's still going to be some problem – a vulnerability or a functional bug – that forces me to update the dependencies I'm working with. At that point there's still a chance that things I don't want get into my product.
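One concrete guard for exactly this situation is pinning each dependency by cryptographic hash at the time it's vetted, so a silently swapped artifact fails loudly at install time instead of slipping in. A minimal sketch of the idea (pip's `--require-hashes` mode is the real-world version of this):

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """True only if the artifact still matches the digest recorded at vetting time."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# When a dependency is reviewed and frozen, record its digest...
pinned = hashlib.sha256(b"dependency-1.0.tar.gz contents").hexdigest()

# ...then any later artifact that doesn't match the recorded digest is refused.
print(verify_artifact(b"dependency-1.0.tar.gz contents", pinned))  # True
print(verify_artifact(b"tampered contents", pinned))               # False
```

Hash pinning doesn't help if the malicious code was already present when you vetted the release, but it does stop the artifact being swapped out from under you afterwards.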

The software industry as a whole has processes in place to try to catch this stuff, but a lot still seems to run on trust, and some programs pull in a lot of libraries for simple things. Hell, a few years ago a small utility (11 lines of code) in NPM was pulled from the repository over a dispute and caused enough chaos in the course of a couple of hours that NPM wound up restoring the package from backups to keep the world working properly. In that case the supply chain disruption wasn't done to break into systems or steal data – it was a protest of sorts – but the end result was very disruptive.
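That utility was NPM's left-pad, and for a sense of how small a package can be and still break the world when it vanishes, here's a rough Python equivalent of what it did:

```python
def left_pad(value, length: int, ch: str = " ") -> str:
    """Pad value on the left with ch until it is at least `length` characters."""
    s = str(value)
    while len(s) < length:
        s = ch + s
    return s

print(left_pad("5", 3, "0"))  # 005
print(left_pad("abc", 2))     # abc (already long enough, returned unchanged)
```

Thousands of packages, including major build tools, depended on that one tiny function, so un-publishing it broke builds across the ecosystem within hours.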

So, even doing everything right, and being very good about the security of my product, there's still a very real possibility of getting breached. And if I'm not paying attention and don't happen to catch what's going on, this type of breach could run for a very long time before it's caught, patched, and no longer a threat.

That's why these supply chain attacks are terrifying. And if you have coworkers or other folks who don't seem to see this as an issue, that's a pretty big red flag to watch for.

  • Sure, you are banning them for “security” reasons. Right.

So a couple of days ago the FCC updated a list of banned telecommunications equipment to include, and I'm quoting here:

    Routers produced in a foreign country, except routers which have been granted a Conditional Approval by DoW or DHS.

If you want to read it, it's all on the FCC site linked here.

Now, as far as I'm aware there are no domestic manufacturers of routers in the USA. So what they have done is ban the import and sale of any new equipment that does not already have an FCC approval tagged to the device.

If there were a legitimate concern about the security of the routers deployed in the world, why are only new devices being targeted? I would assume the decision to ban these things is based on some legitimate history of security issues, or a history of operating in bad faith on the part of these manufacturers. If that's the situation, why are the existing devices not getting flagged as a problem? Why are we not being told it's time to replace them?

So if there isn't a history of bad behavior, what is this about? The argument, as I understand it, is that there are concerns about the devices' security and their potential to be used as an attack vector, rather than any indication that they have actually been used as one.

    Is that legit?

Arguably yes. But without a history of bad behavior, this is either the US Government pressuring hardware vendors to move manufacturing back to the US, or it's a breakdown in the chain of trust that has let us take advantage of offshore manufacturing for as long as we have.

If you look at it, every device you use establishes a chain of trust, whether you realize it or not. Let's look at your phone – say, an iPhone of some generation.

First of all, you are trusting Apple, since they built the device and the operating system it runs. Implicit in that is that you are also trusting everybody Apple has trusted as part of its development and supply chain, on both the hardware and software sides. That includes the folks who manufacture the screen and storage, and the developers who write the software that makes up iOS – including any libraries or tools they use to build the operating system.

You would think this would be fairly simple, but the supply chain for software and hardware gets really complicated, really quickly. All sorts of supply chain attacks have shown up in the news recently, like the one below:

    https://snyk.io/articles/poisoned-security-scanner-backdooring-litellm/

The general idea is to look at the software libraries and service providers that your providers make use of, and attack those instead of coming after you directly. The impacted software library I linked above is downloaded somewhere around 3.4 million times per day, and this attack was live for about three hours. Assuming an even distribution of downloads, that would mean the people behind it reached roughly 425,000 downloads while it was live, and who knows where they were able to get from there.
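The back-of-the-envelope math behind that exposure window, assuming downloads spread evenly across the day:

```python
downloads_per_day = 3_400_000   # reported daily downloads of the library
hours_live = 3                  # roughly how long the poisoned release was up

exposed = downloads_per_day / 24 * hours_live
print(round(exposed))  # 425000
```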

So obviously we have to draw a line somewhere and work on the assumption that Apple, in our example, is doing what's right and has done its due diligence on things further down the chain.

So if we take this new ban at face value, the US Government has some trust issues with the router manufacturers and is taking steps to address them by forcing manufacturing into the hands of domestic companies it can regulate and hold to some level of security. However, the cynic in me wonders which hardware vendors – if any – are already greasing palms to get exemptions for their hardware under this program.

And to be clear, there are going to have to be some exemptions here – it's going to take a long time for anyone to gear up to build consumer routers domestically in the US, considering the number of these things sitting in people's homes, offices, and datacenters.

  • That didn’t take long.

So. Apple releases the MacBook NEO less than two weeks ago, and suddenly I have an email from Microsoft through the Windows Insider program talking about the company's commitment to Windows quality.

    There has been a lot of chatter about the quality of Windows taking a shit over the last few years. So let’s see what they are saying in the message that was just sent out.

    Ok.

So that's not a huge thing for me, though I know a lot of people were pissed off when that functionality was pulled in Windows 11 with no really good explanation of why it needed to happen. Let's see what else is in here.

    Ohh… And. Here. We. Go.

This is one of the key reasons I loaded up a copy of Windows 11 LTSC the last time I had to reload the OS on my desktop system. To be clear, what I need the operating system to do is provide me with an entry point to run applications and manage files. I absolutely do not need it to include, out of the box, a pile of LLM-based bullshit that just consumes system resources.

I get that Microsoft wants to add AI features to the system, and I can understand that some people might want them, but let's do this: make them an optional download that you don't have to install, instead of baking them in and making them non-removable by default.

And while we're at it, how about we take Windows Recall and stop building spyware and tracking tools into the freaking operating system? The last thing I want is the operating system tracking everything I'm doing and providing a catalog for people to dig through my activity on the device upon request.

Let's see what Microsoft's definitions of "Craft" and "Focus" turn out to be.

Ok, I'm fine with streamlining this, and skipping updates during setup might be helpful in some cases, but what about just making it so Windows updates don't completely screw over the operating system when you load them?

Given that you posted news just a few months ago that you would be pre-loading File Explorer in the background to try to improve its performance, I'm sure hoping you're going to do something other than play shell games with when things happen.

From what I've read online, File Explorer switched programming models between Windows 10 and Windows 11, and that's where a huge chunk of the slowdown came from. If that's the case, I would really like to know why they bothered with the change. File Explorer just needs to let me get at my files. I don't need entry points for AI crap, and I don't need it pre-rendering things or changing how folders are displayed.

Frankly, all those widgets and feeds are shut off in my install of Windows, so as long as I can keep doing that I don't really give a crap about this one.

    Alright, so more lip service on something that we can’t really quantify.

And now we get into some of the actual things they will be doing:

Ok, so we're going to de-bloat the operating system to try to speed things up. I'm fine with that, but you really don't have a choice here. Memory pricing has gone from around $100 for a 32GB kit of DDR4 to something like $350 for the same kit in less than a year. DDR5 is just as bad, if not worse, and storage costs are doing the same. 1TB NVMe SSDs are about double what I was paying last year, and even magnetic hard drives are climbing in price – when you can find stock at all.

A lot of people, myself included, are looking at the price of memory, disks, and even GPUs and deciding to postpone hardware upgrades. My desktop machine is a little long in the tooth right now, but it still runs everything I want as well as I need it to, and unless that really changes I don't think I'll be building a new system any time soon. Even my laptop is a little older now, but an M3 Pro MacBook is still more performant than I need.

Even if I were having performance problems, I would probably be looking at trying a copy of Linux on my system rather than just grabbing another piece of hardware. Given all the BS that's crept into the OS over the years, I have a feeling a Linux install might perform better than what I'm seeing on the Windows side of things.

Frankly, I don't care what you call the underlying frameworks. Just make the damned thing work properly, and make it responsive enough that I'm not sitting there staring at the screen waiting for stuff to load.

Frankly, rolling back to Windows 10 would probably fix most of the issues I have. But sure, let's see what you can do to make File Explorer more livable.

Frankly, if I want Linux, I'll log on to the system(s) I have running that operating system. I have used WSL in the past, but I've honestly always found it to be kind of an odd duck. Any time I needed to do something quick in a Linux terminal on a Windows machine, it's generally taken less time to just redo whatever I was doing in PowerShell or a plain batch file than to turn on WSL.

    I’m sure that there are folks out there that do make use of this thing but I’m not really one of them so big meh from me on this one.

    Ok this one is going to be fun, let’s see what you have in mind.

Don't care. You barely listen to what people say we want, so this doesn't really matter to me. Start doing things that prove you give a crap about what's going on, and let's see where it goes.

Oh please, let's get this stuff done. I have had nothing but issues with my Windows laptops for years when pulling them on and off docking stations. These power issues are things I have NEVER had with any of the macOS devices I've used.

    And yes, please start holding your hardware partners to a standard where their drivers work better. There have been piles of examples of Nvidia’s drivers shitting the bed in the last few years, and I remember having all sorts of issues with AMD’s GPU drivers when I was running the RX 580 – I don’t think I ever let that computer sleep while the GPU was installed. If I did, the damned thing would never come out of its sleep state.

    Granted, you have a harder fight here than Apple does. Apple controls all the hardware that goes into their devices out of the box, and you don’t have that level of control outside of the Surface line. I suppose you are getting closer with the ARM-based laptops, but most of those are still being built by other OEMs that you can’t completely dictate terms to. Come on, though – it’s been years of this, and frankly it’s one of the major things that drove me to pick up a MacBook for my personal laptop last time around.

    Ok, so Windows Updates aren’t something that bothers me much, but don’t you only push updates out once a month anyway?

    I don’t have much of a problem with Windows Updates needing to be installed, and I don’t find the general process all that invasive for the most part. However, I do remember the days of Windows XP, where people would go years or longer between running updates, and the mess that type of approach would cause.

    Don’t really care. I’m not a huge fan of biometrics being stored on my device given what you have been doing with Windows Recall. That’s not information that I’m sure I want to trust you with just yet.

    Again, don’t much care. Just pick something and stop screwing around with the damned layout. All I need the start menu to do is point me to where I can launch an application and access settings.

    Ok, that might be interesting. Most of the time I wind up leaving the computer with notifications just turned off, because they have gotten to a point where they are really quite distracting.

    Search has been crap for years; anything you do there would probably help. Personally, though, I’m generally aware of where things live on my computer, so I don’t have much use for searching for random crap on the machine.

    If you want to have a look at the full message that they put up you can see the thing here.

    It’s a lot of the right type of noise, but you know this is being done now because Apple has suddenly dropped the entry point into their ecosystem considerably. And say what you will about Apple, their user ecosystem and user experience across their devices is solid. The limiting factor to getting in there has always been the “Mac Tax”. You can argue the value of Apple hardware compared to Windows devices at the same price point, but the simple fact is that the point of entry into a macOS device has always been higher than that of a Windows one. Now, though, I’m not aware of many, if any, Windows devices at the price point of the MacBook Neo that match the build quality and user experience you would get from the Neo.

    Actions speak louder than words, so we will see what comes up over the next couple of feature releases. If it’s all more AI-packed slop, well, it might be time to start looking at moving my gaming machine over to Linux.

  • Choosing a Storage Server for home use.

    One of the things I have spent some time playing with is storage solutions for deployment at home. My primary uses for a storage server are storing backups from my other systems, holding media for streaming in my house, and acting as a general dumping ground for files that need to move from machine to machine.

    The obvious solution was to go pick up something from Synology, QNAP, or one of the other vendors who build purpose-built solutions for this type of thing and deploy that. However, a lot of those solutions were, at the time, priced more for business use than for deployment at home. I had also been bitten in the past by a Western Digital MyCloud with somewhat lacklustre firmware updates over time, which pushed me to shop around for something a bit more robust.

    Ruling out a purpose-built device means that I’m building my own server, and the choice comes down to what software is going to back the thing once it’s deployed. The only package I found at the time that would handle the mix of random drives I had sitting around was Unraid. Unraid’s data protection is a non-standard take on RAID: the largest disk in your array is dedicated to parity, which protects you in the event of a disk failing in the array. Unraid has a good description of the protection they have in place in their user documentation.
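
    For the curious, the core idea behind single-parity protection can be sketched in a few lines of Python. This is not Unraid’s actual implementation – the real thing works per-sector on live disks and also supports dual parity – it’s just a toy illustration, under the assumption that parity is the bytewise XOR of the data disks, of why one parity disk lets you rebuild any one failed data disk.

    ```python
    from functools import reduce

    def build_parity(disks: list[bytes]) -> bytes:
        """XOR the contents of every data disk together, byte by byte."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

    def rebuild(survivors: list[bytes], parity: bytes) -> bytes:
        """Recover a single failed disk: XOR the survivors with the parity."""
        return build_parity(survivors + [parity])

    # Three equal-size "disks" (Unraid pads smaller disks conceptually;
    # real parity math requires equal lengths).
    disks = [b"AAAA", b"BBBB", b"CCCC"]
    parity = build_parity(disks)

    # Pretend the middle disk died; rebuild it from the other two plus parity.
    recovered = rebuild([disks[0], disks[2]], parity)
    assert recovered == b"BBBB"
    ```

    Because XOR is its own inverse, XORing the parity with all the surviving disks cancels out everything except the missing disk’s contents – that’s the whole trick, and it’s why only one disk’s worth of capacity is spent on protection.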

    Looking at TrueNAS, the ZFS pools it uses have advantages from a data-resiliency standpoint and a performance standpoint, as well as the ability to pull off some neat tricks with snapshots. However, it doesn’t really handle the mixed pile of disks that I was working with at the time, something that was fairly important since I didn’t want to go out and buy a pile of new disks when I had plenty sitting around collecting dust.
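
    A rough back-of-the-envelope comparison shows why the mixed-disk point matters. In Unraid’s scheme the largest disk becomes parity and every other disk contributes its full size, while a single ZFS RAIDZ1 vdev effectively treats every disk as the size of its smallest member, with one disk’s worth of space going to parity. The numbers below are a simplified sketch with hypothetical drive sizes – no filesystem overhead or padding is accounted for – not an exact model of either system.

    ```python
    def unraid_capacity(sizes_tb: list[float]) -> float:
        # Largest disk is dedicated to parity; every other disk
        # stores data at its full size.
        sizes = sorted(sizes_tb)
        return sum(sizes[:-1])

    def raidz1_capacity(sizes_tb: list[float]) -> float:
        # A RAIDZ1 vdev uses each disk as if it were the smallest member,
        # and one disk's worth of space goes to parity.
        return min(sizes_tb) * (len(sizes_tb) - 1)

    mixed = [2, 3, 4, 8]  # a hypothetical pile of leftover drives, in TB
    print(unraid_capacity(mixed))  # 9 TB usable
    print(raidz1_capacity(mixed))  # 6 TB usable
    ```

    With a matched set of drives the two schemes land in the same place, but the more lopsided the pile, the more usable space the Unraid approach squeezes out of it.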

    Another thing people will probably be quick to point out is that Unraid is not a free solution. When I bought the software it was a single purchase priced by the number of drives you attach to the server. Pricing is similar now, but unless you purchase the “Lifetime” version it no longer comes with lifetime updates, so that’s something you should be aware of. I personally prefer free solutions wherever possible, but paying for Unraid got me up and running quicker than I would have been able to if I had tried to roll my own solution.

    Getting Unraid installed on a USB key is fairly straightforward, and once you boot the machine you are going to use as a storage server from the USB device, you just select the drives to add to your array, set up your shares, and you are basically up and running. You can get more stuff running as you start bringing containers into the picture, but for basic file services it’s not much more complicated than that. Swapping out drives to increase capacity, or when one dies, is dead simple, provided that your parity devices are working properly.

    I’ve run this Unraid server for a number of years now, and it’s done pretty much everything I have needed for a long while. However, there are a couple of things TrueNAS does out of the box that would be nice to have. The ability to replicate snapshots between two servers, and the ability to upload copies of the data to a cloud storage provider, make backing things up much simpler than what’s in place on my Unraid server right now. And yes, there are a number of plugins and other things that can add this type of function to a device running Unraid, but none of them have been baked into Unraid by the developers at this point, so I’m reliant on somebody else maintaining those plugins and keeping them updated and functional as Limetech does their thing.

    So now to the question of what someone reading this blog should do. That’s going to depend on what you need the storage to do for you and what you happen to have lying around.

    If you don’t have an existing system you can reuse as a server, then I would have a look at a pre-built solution like what Ugreen, Synology, or QNAP offer. The pricing on the 2- and 4-bay units has dropped over the years to a point where you could potentially get a cheaper used system to act as a host instead, but it might not really be worth it given the extra work of stuffing the disks into a machine that wasn’t meant to take them.

    If you have a spare machine sitting around with enough SATA ports and NVMe slots to make a usable server, and you are going to be purchasing new drives to go with it, I would look at TrueNAS. If you have a pile of mismatched drives you want to reuse, then I would still go with Unraid, as long as those drives are large enough to offset the cost of the Unraid license you have to purchase instead of putting that money towards a matching set of drives.