The job of a software architect is difficult, just like almost every role in software development. Architects have to keep track of many subtly interacting quality attributes, often across multiple projects, any one of which may be too big or evolving too quickly to meaningfully keep in mental cache. To make matters worse, architects don't have anywhere near the level of tool support - compilers, static analysis tools, auto-completion - available to developers. They are much more reliant on experience, awareness, intuition, and heuristics.
In light of this, it's interesting and useful to consider what tools are available to help architects. In particular, I want to look at the role of testability in the architect's job, and to try to show how it can serve as a meaningful proxy for other, perhaps more important qualities in a software system. Testability is a quality that can promote the health of other desirable qualities, and it can serve as an indicator of whether those qualities are present. The metaphor I like to use is that testability is a kind of barometer for software architects. A barometer only really tells you the air pressure, but you can often use that to determine whether it's going to rain. Testability only really tells you how amenable your code is to useful testing, but you can often use it to help determine whether your system is modular, organizationally scalable, and so forth.
What is testability anyway?
To meaningfully discuss testability as a tool, we need to establish some definition of what it means. As with "software architect", there is no perfect answer. On some level all software is testable in that you can test it. By hook or by crook you can write some code that verifies the behavior of pretty much anything with a specification. So clearly just "being testable" isn't a sufficient definition.
At the same time, it's also pretty clear that some software is simpler to test than other software. It may be easy to test for a number of reasons. Perhaps it's easy to understand, so you have a clear picture of how to test it thoroughly and properly. Perhaps the chunk of code is easy to instantiate without requiring a whole bunch of scaffolding and support objects. That not only saves on keystrokes but also has other big benefits: it isolates behavior, it may mean your tests are faster, and it generally means your tests are easier to understand and thus maintain.
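To make that concrete, here's a minimal Java sketch (the class and connection string are hypothetical, not from any real codebase) of code that resists instantiation: even a test of pure string formatting has to pay for a database connection.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// A hypothetical report generator that is awkward to instantiate in a test:
// the constructor builds its own database connection, so any test must stand
// up (or fake) a real database just to exercise formatTitle().
public class ReportGenerator {
    private final Connection connection;

    public ReportGenerator() throws SQLException {
        // Hard-wired scaffolding that every test has to pay for.
        this.connection = DriverManager.getConnection("jdbc:postgresql://prod-db/reports");
    }

    public String formatTitle(String name) {
        return "Report: " + name.trim();
    }
}

Nothing about formatTitle needs a database, but the constructor makes that impossible to avoid.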
If you do a little poking around you'll find that people have hit upon certain code qualities that generally influence testability. But in the end we don't really have a "testability-o-meter" that we can point at a piece of code. There's no accepted way to assign a "testability rating" to software that tells you whether code is more or less testable than other code, or even whether it's "easy to test" or "hard to test". We can sometimes get these kinds of numbers for other qualities like modularity or complexity, and things like scalability also lend themselves to measurement, but testability isn't (yet) in that realm.
Instead, determining whether something is testable is a judgment that people have to make, and one you can only make in an informed way if you understand code. This is why my definition of "software architect" - from a practical standpoint - includes being able to understand code well at many levels. You have to be able to recognize when, say, dependency injection could replace local object construction to reduce coupling in a system. You need to be able to spot - or at least know to be on the lookout for - circular dependencies between modules. And in general you'll need to be able to do this not only with code that you're writing but with code that you only see in reviews, or perhaps only see described in documents.
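As an illustration of the kind of refactoring I mean - all the names here are made up - this sketch replaces local object construction with constructor injection, so a test can swap in a stub for the real collaborator:

import java.util.List;

// Before (hypothetical): the service constructs its own collaborator,
// coupling it to one concrete class and making it hard to test in isolation:
//
//   public class InvoiceService {
//       private final TaxCalculator calculator = new EuTaxCalculator();
//       ...
//   }

// After: the dependency is injected through the constructor. Tests can pass
// a trivial stub; production code passes the real implementation.
interface TaxCalculator {
    double taxFor(double amount);
}

class EuTaxCalculator implements TaxCalculator {
    public double taxFor(double amount) { return amount * 0.20; }
}

public class InvoiceService {
    private final TaxCalculator calculator;

    // Constructor injection: the caller decides which TaxCalculator to use.
    public InvoiceService(TaxCalculator calculator) {
        this.calculator = calculator;
    }

    public double total(List<Double> lineItems) {
        double subtotal = lineItems.stream().mapToDouble(Double::doubleValue).sum();
        return subtotal + calculator.taxFor(subtotal);
    }
}

The mechanical change is small, but it's exactly the kind of thing an architect needs to be able to recognize when reading someone else's code.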
So I've just told you that testability is hard to measure or even to define. In fact, I've told you that to make heads or tails of it you need to be an experienced programmer. On its face, then, it sounds like the cure is worse than the disease: yes, you've got complexity in your projects to deal with, but now I want you to do something even harder to make those problems go away.
On some level that's true! Gauging testability isn't simple and it isn't perfect, but by targeting testability we get a couple of important benefits, because testability is special.
Testability represents your first customer, your first users: your tests! Tests are very often the first place your code is used outside of your head, which means they're where you'll first spot difficult APIs or awkward relationships that slipped through your design.
Tests force us to use code, and they force us to consider it at many different zoom levels - from unit tests to functional tests to integration tests, we get to see it all. And tests can - and should - happen early and often in the development process. This is how you get maximum benefit from them.
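Here's what that "first user" might look like for the hypothetical InvoiceService sketched earlier, assuming JUnit 5. Notice that the lambda stub is only possible because the dependency was injected; if construction were painful or the API awkward, this is where we'd feel it first.

import static org.junit.jupiter.api.Assertions.assertEquals;
import java.util.List;
import org.junit.jupiter.api.Test;

// The test is the first "user" of InvoiceService outside the author's head.
class InvoiceServiceTest {
    @Test
    void totalAddsTaxFromInjectedCalculator() {
        // A trivial lambda stub stands in for the real calculator.
        InvoiceService service = new InvoiceService(amount -> amount * 0.10);
        // Subtotal is 100.0, stubbed tax is 10.0, so the total should be 110.0.
        assertEquals(110.0, service.total(List.of(60.0, 40.0)), 1e-9);
    }
}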
If you're paying attention, you'll notice that I just made a significant shift in terminology. I went from talking about "testability" to "tests", from "code that can be tested" to "code with tests". I suppose it's arguable that you can have testable code without actually having tests, but that seems a bit academic to me. I've gone on and on about how difficult it is to measure testability, but one of the most effective and practical ways to assess testability is to simply test your code!
So for my purposes, testable code is also tested code. I won't quibble about precisely how much testing is enough, or at what level it should be done; there are plenty of other people who are happy to tell you that.
But if your tests add value to your software system, then I'd wager that they exercise your code enough to highlight a lot of the software qualities for which testability is a barometer.