Retrospectives, Tech in the 603, The Granite State Hacker

Context is for Kings

A retrospective in software development methodology is a look back at a project cycle. Usually, a retrospective refers to a formal meeting held at the end of a development cycle. Retrospectives provide context for where you are, and can help you figure out where course adjustments might be needed.

I’m going to start a series of project retrospective blog posts. My intent is to go a bit beyond the standard “lessons learned” aspect of a classic case study by discussing ways to address those lessons learned if I were delivering on the same requirements today.

I understand there’s some risk in this. The very nature of talking about “what I’d do different today” is a dynamic target. The answer could be different tomorrow, when you’re reading this. Perhaps that’ll be another chance to revisit the topic.

In these posts, I’m going to go over my own project history and do a full post-mortem on whole project deliveries. In order to protect potential client property, I’ll generally avoid identifying the client I did the work for and/or any publicly known project name. Teammates familiar with the projects will be able to identify them, but that’s ok.

Whenever you’re working on a longer-term, non-trivial application, it’s inevitable that technical debt sets in before the project is completed. Most commonly, some components are obsoleted by newer versions of the tools used in the project, and there’s no immediate need or budget to update them. As long as those tools are still supported, the update is generally de-prioritized as a problem for another day. Longer term, everything is eventually obsoleted.

Another inevitable part of code development is the classic “if I had it to do over again” aspect. This is a big-picture extension on the old “Make it work, make it right, make it fast” practice. The first time through a project, there’s always “make it work” moves that end up being what gets delivered, without opportunity to get to the “make it right” / “make it fast” cycles. If I had the project to do over again, I’d “make it more right” or “make it faster” by doing ‘X’.

Likewise, every project cycle has its ups and downs in terms of team interaction. Sometimes folks are on the ball, and the code flows. Other times, there are what we call blockers… things that hold up progress.

A blocker could be any number of things.

A common blocker is missing or incomplete requirements. It’s hard for a programmer to teach a computer to do work if the programmer doesn’t know how to do that work.

Another common blocker is access or permissions. A programmer might have a requirement to develop code that depends on another service. If that’s the case, the programmer might be able to build an abstraction of that service, but eventually will need permission to access that service in some form to do integration testing.
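
In C#, that abstraction often looks something like the following sketch (purely hypothetical; the names IPricingService and FakePricingService aren’t from any real project): code against an interface now, and swap in the real client once access is granted.

// Abstraction of the external service the new code depends on.
public interface IPricingService
{
    decimal GetQuote(string symbol);
}

// Stand-in implementation that unblocks development and unit testing;
// integration testing still has to wait until access to the real service is granted.
public class FakePricingService : IPricingService
{
    public decimal GetQuote(string symbol) => 42.00m; // canned value for testing
}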

In this case, I’ll still take the classic retrospective approach. I’ll address the following questions:

  • What did we set out to do?
  • How and why did we deviate from plan?
    • What worked?
    • What didn’t work?
  • What would we like to do next time?
    • How could we improve what worked?
    • How will we minimize the risks of what didn’t work?
    • How will we address any technical debt incurred?

Some retrospectives exclude management in order to keep the discussion from turning political. In better situations, representatives of management and/or project stakeholders are included in order to get a more complete view of the project.

In my posts on the topic, it’ll just be my perspective, and I’ll address the questions holistically. With the projects completed, the delivery teams disbanded, and a bit of name changing to protect the innocent, politics need no longer apply.

I’ll admit, there’s some self-gratification in doing these post-mortems publicly. I’ll be able to show off experience, and hopefully grab the attention of folks who’d like to produce similar projects. My hope is to inspire more of those projects and spark conversation around how to apply similar solutions (preferably with the “desired state” improvements).

Tech in the 603, The Granite State Hacker

Back To Where We Belong

Big changes for me lately, so more of a narrative personal post than a technical presentation…  and with apologies. I’ve been too wrapped up with the finer details of finishing up my project with Fidelity to get any blog posts out at all since November last year (yikes). 

I’ll get back to the regularly scheduled broadcast shortly.

About this time two years ago, I started a project with a financial firm out in Back Bay, in the John Hancock tower.  We rolled out SharePoint 2013 on-prem and migrated to it from multiple legacy farms… both of which were WSS 3.0.  So between the build-out and the migration, I ended up on that project for a solid three months or so.  Not a bad gig, but… a Boston commute (thank God for the Boston Express Bus), and no C# (not even ASP.NET).  It was really more of an IT Pro gig than development.  I was able to do some really fancy PowerShell work to manage the migration, but it was definitely not my first choice.

That fall, I landed a role on Fidelity’s new order management/trading desktop build-out. I have to say, that was roughly the kind of project I’ve been angling for, for years.  

Technology-wise, the only thing that could have made it any better was to be Windows 10/UWP based, maybe with mobile tendencies.  Alas, building rich clients of any sort is relatively rare, so .NET 4.6 is better than… not .NET at all. 

Still… All C#/WPF, desktop client…   

A good number of my most recent technical posts and presentations were heavily influenced by this project.   Sure, Boston again… for a good portion of it… but they let me work at Fidelity’s Merrimack location for a good chunk of the latter half of the project, too.

And it was maybe the longest running single version project I’ve been on in my life.  I was on it for 18 months…  a full three times longer than a “big” six month project.

All that to say I haven’t spent a full, regular day at a desk at BlueMetal Boston’s Watertown office in just about two full years. 

In that time, I’ve seen too many great teammates move on, and about as many new teammates join us.  We were employee-owned back then, and I rode out the entire Insight purchase while “living” at the client’s site.

Still, one thing that bothered me is a pattern I don’t intend to repeat.  When I took my position at Jornata, which became BlueMetal, I accepted the title of Senior Developer, even though it appeared to be a step down from my Systems Architect title at Edgewater.

My reason for accepting that was primarily that I was joining Jornata in a make-or-break SharePoint role, and I needed to get a little more SharePoint experience under my belt before I was comfortable calling myself a SharePoint Solutions Architect.  By the time the BlueMetal merger worked itself out, I realized my options had opened up broadly.  I got the SharePoint experience I needed, but it became very apparent that it wasn’t the experience I wanted.  Unfortunately, I was stuck with the “Senior Developer” title even on projects where my experience ran much deeper.

SharePoint is cool to use, and even to integrate with, and IT Pros get a lot of mileage out of it, but as for software developers… well…  let’s just say I let SharePoint fuck up my career enough that I had to re-earn my “Architect” title.  (Pardon my language; there’s no better word for it.)  I always deliver a win, but I’m a Visual Studio kind of guy.  I’m still happy to develop integrations with SharePoint and support the SharePoint community… but while there have been major improvements in the past few months alone, Microsoft has been muddling its SharePoint developer story for years, and I let myself fall victim to it.

Thankfully, the Fidelity project did the trick…  it was just the level of high-touch, real Enterprise application development that I needed to earn my self respect back, and prove out my abilities in the context of BlueMetal. 

I’ll admit, while I feel this is restoring my title, it is certainly not lost on me that “Architect” at BlueMetal is a class (or two) above “Architect” in any of my previous companies.  I always felt I was there, even if I felt discouraged and unsupported by my former teams.  I am truly honored to be among those who’ve earned this title in this company, and very appreciative of the recognition.

At BlueMetal, I’m supported and inspired by my team, and I really see this as validation that my career vector is now fully recalibrated.

I’ve said this before: meteorologists are very well educated, with lots of fancy tools to help them be more accurate, but the reality is that unless you’re standing in the weather, you don’t have much hope of getting it truly right.  I have no intention of becoming a weatherman architect.  Hands on the code is where my strength (and value) lies, so that’s where I’ll always shoot to be.

Tech in the 603, The Granite State Hacker

Intro to Rx.NET (Reactive Extensions)

Thanks to the gang for joining me at the Microsoft Store in Salem, NH for my presentation on “Intro to Rx.NET”.  Since it’s a toolkit I’ve been digging into a lot at work lately, I had a feeling folks might appreciate a broad-brush intro to it.

Please check out the Granite State (NH) Windows Platform App Devs (#WPDevNH) on meetup.com to connect with the group and maybe even participate, yourself.  In addition to the core presentation topic, we had a great speculative debate about how Microsoft’s purchase of Xamarin might settle out.  Also, I’ll be attending Build 2016, so we’re talking about having a special meeting early in April to recap and consider future presentations. (Stay tuned!)

Rx reminds me a lot of other declarative language elements (XSL, XAML) in that it seems really natural at first; then you start looking at more advanced stuff and the complexity becomes boggling… then you start to really understand the abstractions and it feels natural again.
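
As a tiny, hypothetical illustration of that declarative feel (this isn’t from the slide deck), a pipeline is composed up front and nothing actually runs until someone subscribes:

using System;
using System.Reactive.Linq;

class RxFlavor
{
    static void Main()
    {
        // Declare the pipeline: squares of the even numbers from 1 to 10.
        var evenSquares = Observable.Range(1, 10)
            .Where(n => n % 2 == 0)
            .Select(n => n * n);

        // Nothing happens until subscription; then values flow through the pipeline.
        evenSquares.Subscribe(Console.WriteLine);
    }
}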

Without further ado, here’s my slides for the presentation:

[office src="https://onedrive.live.com/embed?cid=90A564D76FC99F8F&resid=90A564D76FC99F8F%21824879&authkey=AC01r6jwJX5ng5M&em=2" width="402" height="327"]
I’d like to thank the folks at http://IntroToRx.com; I referenced them more than any other source in putting this together.

Finally, for the code I demoed, please check out the post I mentioned, here:

Hope to see you soon!

-Jim Wilcox

The Granite State Hacker
Tech in the 603, The Granite State Hacker

Rate Limiting Events with the Reactive Extensions Rx

A fun little challenge that came up at work…  we have a stream of events that we publish/subscribe via a Reactive IObservable.  One of the things we’re testing for is swamping the subscribers with events for periods of time.  The tools in the System.Reactive.Linq namespace include utilities like .Throttle(…) and .Sample(…), but none of these supported our needs.

At this point in this particular event stream, we can’t afford to drop events.

Sample(…) picks off one item at each interval, dropping the rest.  Throttle(…) sets up a window of time and won’t release an object until that window has elapsed with only one object buffered; if another object arrives while the window is open, the window resets to the original timespan.

Then there’s .Buffer(…) which can store event objects for a window, and then release them.  That amounts to releasing all the events in periodic bursts, but it’s not a rate limit.

Finally there’s the .Delay(…) method… which, ironically, delays publishing objects by an offset amount… but it delays all objects by the same time offset.  If you have three events come in, 1 millisecond apart each, and put a 1-minute delay on them, they’ll enter the collection and, one minute later, be published out… still just milliseconds apart, in a quick burst.
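
To make those differences concrete, here’s a quick sketch (not from the original post) against a hypothetical fast source that produces an item every 10 milliseconds:

using System;
using System.Reactive.Linq;

class OperatorComparison
{
    static void Main()
    {
        // Hypothetical fast source: a new item every 10 milliseconds.
        var source = Observable.Interval(TimeSpan.FromMilliseconds(10));

        // Sample: roughly one item per second; everything in between is dropped.
        var sampled = source.Sample(TimeSpan.FromSeconds(1));

        // Throttle: emits only after a full second of silence, which never happens with this source.
        var throttled = source.Throttle(TimeSpan.FromSeconds(1));

        // Buffer: keeps every item, but delivers them in one burst per second (not a rate limit).
        var buffered = source.Buffer(TimeSpan.FromSeconds(1));

        // Delay: same relative spacing as the source, just shifted one second later.
        var delayed = source.Delay(TimeSpan.FromSeconds(1));

        // Watch the buffered stream for a few seconds to see the bursts.
        using (buffered.Subscribe(batch => Console.WriteLine($"burst of {batch.Count} items")))
        {
            Console.ReadKey(true);
        }
    }
}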

I want to be able to constrain my publisher so that it emits at most n items per second.

My solution separates the publish and subscribe sides.  On the subscription side, it loads incoming events into a queue and emits them immediately, up to the per-second limit.  On a timer, it resets the counter and emits any overflow objects from the queue, also up to the limit.

Yes, this model has problems, so address your risks appropriately… (“use at your own risk”).    You can run out of memory if the origin provides more items than the limiter is allowed to emit over long periods of time.

Anyway, here’s a Program.cs with the RxHelper extension RateLimit(…).  The program includes a decent little before/after demonstration of RateLimit(…).



using System;
using System.Collections.Concurrent;
using System.Reactive.Concurrency;
using System.Reactive.Disposables;
using System.Reactive.Linq;
using System.Threading; // for Interlocked

 
namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
 
            // simulate very fast (no delay) data feed
            var unThrottledFeed = GetFastObservableSequence();
 

            unThrottledFeed.Subscribe(Console.WriteLine);

            Console.WriteLine("That was an example of an event stream (with 100 events) only constrained by system resources.");

            Console.WriteLine();


            Console.WriteLine("Now rate-limiting at 10 items per second...\n");


            const int itemsPerSecond = 10;
 
            var throttledFeed = GetFastObservableSequence()
                    .RateLimit(itemsPerSecond);
 
 
            throttledFeed.Subscribe(Console.WriteLine);
 
            Console.WriteLine("END OF LINE");
            Console.WriteLine("Note that the Main() method would be done here, were it not for the ReadKey(), the RateLimit subscriber is scheduled.");
            Console.WriteLine();
            Console.WriteLine("Rate limited events will appear here:");
            Console.ReadKey(true);
 
 
        }
 
        #region Example Artifacts
        private static IObservable<TestClass> GetFastObservableSequence()
        {
            var counter = 0;
            var rnd = new Random();
            return Observable.Defer(() =>
                Observable.Return(0)
                    .Select(p =>
                    {
                        counter++;
                        var x = new TestClass(30.0, counter);
                        x.Value += Math.Round(rnd.NextDouble(), 2);
                        return x;
                    })
                    .Repeat(100));
        }
 
        private class TestClass
        {
            public TestClass(double value, int instance)
            {
                Value = value;
                Instance = instance;
            }
            private int Instance { get; set; }
            public double Value { get; set; }
            public override string ToString()
            {
                return $"{Instance}: {Value}";
            }
        }
        #endregion
    }
 
 
   

    internal static class RxHelper
    {
        public static IObservable<TSource> RateLimit<TSource>(
            this IObservable<TSource> source,
            int itemsPerSecond,
            IScheduler scheduler = null)
        {
            scheduler = scheduler ?? Scheduler.Default;
            var timeSpan = TimeSpan.FromSeconds(1);
            var itemsEmitted = 0L;
            return Observable.Create<TSource>(
                observer =>
                {
                    var buffer = new ConcurrentQueue<TSource>();
                    Action emit = delegate()
                    {
                        while (Interlocked.Read(ref itemsEmitted) < itemsPerSecond)
                        {
                            TSource item;
                            if (!buffer.TryDequeue(out item))
                                break;
                            observer.OnNext(item);
                            Interlocked.Increment(ref itemsEmitted);
                        }
                    };
                   
                    var sourceSub = source
                        .Subscribe(x =>
                        {
                            buffer.Enqueue(x);
                            emit();
                        });
                    var timer = Observable.Interval(timeSpan, scheduler)
                        .Subscribe(x =>
                        {
                            Interlocked.Exchange(ref itemsEmitted, 0);
                            emit();
                        }, observer.OnError, observer.OnCompleted);
                    return new CompositeDisposable(sourceSub, timer);
                });
        }
    }


}
 
 

Edit 1/27/2016:  Had to tweak RateLimit(…) to immediately emit objects as long as it hadn’t hit its limit for the time span.  It always queues, just in case, to maintain order.

Tech in the 603, The Granite State Hacker

Live Process Migration

For years now, I’ve been watching Microsoft Windows evolve.  From a bit of a distance I’ve been watching the bigger picture unfold, and a number of details have led me to speculate on a particular feature that I think could be the next big thing in technology….   Live process migration.  

This is not the first time I’ve mused about the possibility ([A big feature I’d love to see in Windows 11]); it’s just that as I work with tools across the spectrum of Microsoft’s tool chest, I’ve realized there are a few pieces I hadn’t really connected before, but they’re definitely part of it.

What is live process migration?  Folks who work with virtual machines on a regular basis are often familiar with a fancy feature/operation known as live virtual machine migration… VMware’s vSphere product refers to the capability as vMotion.  It’s the ability to re-target a virtual machine instance while it’s running… to move it from one host to another.

In sci-fi pseudo psycho-babble meta physio-medical terms, this might be akin to transitioning a person’s consciousness from one body to another, while they’re awake…  kinda wild stuff.

As you can imagine, live VM migration is a heavy duty operation… the guest machine must stay in sync across two host computers during the transition in order to seamlessly operate. For the average user, it’s hard to imagine practical applications. 

That said, live process migration is no small feat either.  A lot of things have to be put in place in order for it to work… but the practical applications are much easier to spot. 

Imagine watching a movie on Netflix on your Xbox (or maybe even your Hololens), but it’s time to roll.   No problem, with a simple flick gesture, and without missing a beat, the running Netflix app transitions to your tablet (or your phone), and you’re off.   Then you get to your vehicle, and your vehicle has a smart technology based media system in it that your tablet hands off the process to.   It could work for any process, but live streaming media is an easy scenario.

From a technical perspective, there’s a bunch of things required to make this work, especially across whole different classes of hardware…  but these problems are rapidly being solved by the universal nature of Windows 10 and Azure.

Commonality required:

  • Global Identity (e.g. Windows Live)
  • Centralized Application Configuration
    • Windows 10 apps natively and seamlessly store configuration data in the cloud (see the sketch just after this list)
  • Binary compatibility
    • Universal apps are one deployable package that runs on everything from embedded devices to large desktops and everything in between.
  • Inter-nodal process synchronization
    • Nothing exemplifies this better than the 1st class remote debugging operation  in Visual Studio.  You can run an app on a phone or device from your laptop, hit breakpoints, and manipulate runtime state (local variables) from the laptop and watch the device react in real time.
  • Handoff protocol
    • I’m sure something like this exists; I don’t have a good name for it, but it’s probably based on something like SIP
  • Runtime device capability checking (the part that sparked this blog post).
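
To illustrate the centralized configuration point above with a minimal, hypothetical fragment (the key name is made up, and this would live inside a UWP app rather than stand alone), settings written to RoamingSettings follow the user’s Microsoft account from device to device:

using Windows.Storage;

// Values written to RoamingSettings sync across the user's devices via the cloud.
var roaming = ApplicationData.Current.RoamingSettings;
roaming.Values["lastOpenedDocument"] = "quarterly-report.docx";  // hypothetical key/value

// On another device, the same app can read the value back once it has roamed.
var lastDoc = roaming.Values["lastOpenedDocument"] as string;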
Over the years, there have been a lot of “write once, run anywhere” coding schemes.  Most involve writing a program and having the compiler sort out what works on each type of hardware… what you get is a different flavor of the program for different kinds of hardware.  In Windows 10, it’s different: the developer codes for different device capabilities, and the application checks for the required hardware at run time.

While UWP does an amazing job of abstracting away the details, it puts some burden on the app at runtime… the developer has to write code that asks: is there a hardware camera shutter button on this machine?  If yes, don’t put a soft camera shutter button on the screen… but now the app has to make that check every time it runs (something like the sketch below).
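
Here’s a minimal sketch of what that runtime check might look like in a UWP app (assuming the mobile extension SDK is referenced; the OnCameraPressed handler and softShutterButton element are hypothetical names):

using Windows.Foundation.Metadata;
using Windows.UI.Xaml;

// ...inside a page of a UWP camera app...
if (ApiInformation.IsTypePresent("Windows.Phone.UI.Input.HardwareButtons"))
{
    // A hardware shutter button exists: wire it up and hide the on-screen one.
    Windows.Phone.UI.Input.HardwareButtons.CameraPressed += OnCameraPressed;
    softShutterButton.Visibility = Visibility.Collapsed;
}
else
{
    // No hardware shutter on this device: fall back to the soft button.
    softShutterButton.Visibility = Visibility.Visible;
}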
I struggled a bit trying to understand this latter point… why would Microsoft want it to work that way?  Except for a few plug-and-play scenarios, it could be optimized away at application install time… unless your process can move to a different host computer/phone/console/tablet/VR gear.

While I am (more recently) a Microsoft V/TSP working for BlueMetal, an Insight company, I have no inside information on this topic.  I’m just looking at what’s on the table right now.  We’re almost there already.  Yesterday, I showed my son how to save a document to OneDrive and, within moments, pick up his Windows 10 phone and start editing the same document on it.

In my mind, there’s little doubt that Microsoft has been working its way up to this since Windows Phone 7… the only question is how many of these tri-annual Windows 10 updates it will be before “App-V Motion”-style live process migration is a practical reality.
Tech in the 603, The Granite State Hacker

A big feature I’d love to see in Windows 11

With all the announcements coming out of //Build, I’m pretty jazzed about what’s coming in Windows 10.   That doesn’t stop me from wishing there were one or two other scenarios Microsoft would get to… and at this point, I’ll have to hope to see them in something after Windows 10.

“App-V-Motion” running apps, migrating them across devices. 

Enable an app running on the phone or tablet or laptop or desktop to seamlessly transition from device to device.

Imagine it’s getting late in the day…  you have a long running process on your desktop that you need to babysit.  Poor timing, sure, but it happens far too often.   Now, rather than being tethered to your desk, you can transition the process to a mobile device, and simply take it with you.   Perhaps it’ll take longer to complete on the mobile device, so when you get home, you hand it back off to bigger iron. 

Or, my other favorite scenario… you’re watching your favorite movie, but it’s time to roll… so you hand off the movie player app to your phone and keep watching while you’re on the go, without missing a beat.

With cloud configuration and storage, this scenario is getting more and more feasible, but given where I’m seeing Windows 10 now, this could potentially be a 10.1 or 10.2 feature.

Tech in the 603, The Granite State Hacker

Reliving “Revolutionary” with Windows 8

“What do you think of Windows 8?”   I hear this question all the time… everywhere I go.   I hear people talking about it on the bus, in line at coffee shops, and even in odd places like hospital rooms.  It’s the biggest change we’ve had in the PC in well more than a decade.  Everyone knows this is as big as broadband in everyone’s home.

But… more than a decade?   Really? 

Definitely.  How old would a child be if it was born the last time there was a *true*, major version iteration of Windows?   3?  8…? 

How about…  18?   Yeah…  18… old enough to drive.  Old enough to be looking at colleges.  The Daytona (Windows NT) / Chicago (Windows 95) user experience, were it a child, would now be looking at an opportunity to suffer the choice between Romney and Obama.  The experience unleashed on IT and the public introduced us to the Start menu, the Desktop, managed application installs, and several other major features that the enterprise and private user alike have now literally grown up on.

Some might argue that Windows XP was a hefty revision that almost qualifies, but I would say not so much.  Improvements abounded, but the core user experience hasn’t changed by more than revision increments in Windows 98, ME, 2000, XP, 2003, 2008, 7… really…  since Windows 95. 

But, with Windows 8, this changes.  Windows 8 brings us a whole new user experience in the “Modern UI” formerly known as “Metro UI”. 

If you recall, Windows 95 still essentially lived on top of DOS, and had enough of the Windows 3.x framework to run all the apps we’d already come to depend on (like Word 6, Excel 5, and Windows AOL 2.5).  While those programs ran in Chicago, there were compatibility issues, and the user interface really started to look crusty on legacy applications.  I was actually a relatively late adopter, waiting until Windows 98 before I finally succumbed to the dark side.  (I had discovered IBM OS/2 Warp and become a fan… it actually took a one-two punch to Warp to get me to switch.  1:  When Warp was stable, it was unbeatable, but when it crashed it was unrecoverable (and crash, it inevitably did).  2:  Command & Conquer / Red Alert had an improved video mode that was only available when installed in Windows… and it was even more awesome in that improved resolution mode.)

Just like Windows 95, Windows 8 is a transitional OS.

One of the big things I keep hearing about Windows 8 is… what a P.I.T.A. it is to figure out. “Microsoft is taking a huge risk with this… why are they breaking my Windows?”, I hear.  Or…  “I’m open-minded.  I subjected myself to it until the pain became unbearable.  (I can’t wait until Mac OS X takes over.)”

Transition, though?  Yes.  Transition.  Again, this is the first real full version increment of the Windows user experience that we’ve seen in years, and it all comes down to this Modern UI thing.  It does exactly what Windows 95 did to Windows 3.x on DOS.  It wipes the slate clean and re-imagines how we operate our computers from the ground up using modern human interface devices… (HIDs). 

Touch screens, movement, gestures, enhanced 3D graphics… these are things whose development started to accelerate not long after the release of 95, but the world was still on the Windows 95 learning curve.  Hardware was too immature and expensive to develop an OS around them then… So, while you were getting comfortable with your desktop, your cell phone’s user experience surpassed it (if you haven’t noticed).

So on the surface (no pun intended) this is what Windows 8 is…  it’s a full OS-deep refresh that catches home computing back up to what people have gotten used to in their cellphones.

“Common sense” says this all implies a true P.I.T.A. for people and companies that dig in on it. 

Let’s look a little deeper, though, at what else this represents.  Again, this is a transitional OS.  It does everything the old user experience did… if you dig a bit.  It does this to support the old applications with their freshly encrusted-feeling user experience.  You can continue leveraging your old technology investments.  Indeed, you can continue making investments in the old user experience…  just know that the writing’s on the wall.

It’s only a matter of time before people do what they inevitably did with Daytona/Chicago… adopt, extend, and embrace, or be extinguished.  

Why?  Because… when it comes down to it, the part that people really hate is not the “user experience” part.   It’s the “NEW” part that hurts.  Once the “NEW” wears off, what you’ve got left is a really genuinely cleaner, better, more efficient UI that leverages new hardware in important ways, and puts it years ahead of desktop OS competition, both in terms of capability, and even in terms of price point…  and pushes that same advantage out seamlessly to a myriad of other devices.  So getting past the sharp learning curve on one device means you’ll be rocking the new UI everywhere in no time.

Like the glory days of the Dot-Com boom, the days of Daytona and Chicago, these will be days of learning and technical renovation, even re-invention.  This is what I see coming with Windows 8 on the desktop, with the added benefit of being even more ubiquitous than it was back in the ’90s.  With the coming of Surface and Windows Phone 8, your apps will have more opportunity to run in more places, on more machines, than ever before… using more “Star Trek” functionality than we’re yet used to.

Those looking to remodel that kitchen… here’s your wake up call.  Windows 8’s user experience is representative of what made the Dot Com days so great… (and there were some plus sides.)  It was when leveraging any of the revolutionary new technology became a competitive advantage all by itself.  Early adopters will feel the pinch of the initial investment, but… with some planning, will reap the rewards by having that pain behind them by the time Windows 9 rolls around. 

I, for one, look forward to my new OS overlord.

Tech in the 603, The Granite State Hacker

Time to Remodel the Kitchen?

A few good reasons to consider keeping your IT infrastructure up to snuff…

http://edgewatertech.wordpress.com/2012/08/21/time-to-remodel-the-kitchen/

(I’m honored to have the post accepted & published on Edgewater’s blog.)  🙂 

Tech in the 603, The Granite State Hacker

Keeping in the Code

At the end of the day, the business solution is always the most important part of the equation, but it’s not the only part.  While I’m working on a solution, I’m also looking at tools, scaffolding, and framework.  This is especially true if others are going to be working on the project, and that accounts for nearly every non-trivial project.

How easy is it to set up?  How easy is it to work with?  Do the expressions make sense?  Can I hand it off to my least experienced teammate, get them to pick this up, and expect reasonable results?  (For that matter, can I hand it off to my most experienced teammate and expect them to respect the design decisions I made? )

Keeping my head in the code is critical.  Losing touch with the tools means shooting in the dark on the above questions.  It doesn’t matter what their experience is: if you ask someone to push a thumbtack into a corkboard but hand them the wrong tools for the job, they won’t be able to do it… or you’ll nuke your budget paying for tools that are overpowered for the job.  (But that thumbtack will be SO IN THERE!)

In any case, on most projects, after the architecture and technical designs have been sorted out, frameworks built, and automations put in place, I’ll take on the coding, too.

Of course, I’ve said this before…  if you can really simplify the work, what’s to stop you from taking the extra step and automating it?   I’m always eyeing code, especially “formulaic”, repetitive stuff, looking for opportunities to simplify, abstract, and/or automate.