Another day, another article on S.F.’s crazy real estate market. Except it’s perfectly rational behavior: The secret is out that S.F. is an awesome place to live with plenty of very high-paying jobs, but also land is scarce and there are lots of development restrictions.
If no policies change, prices—both housing and general commodities—will continue to go up until S.F. is basically another Manhattan; if you want to live there you either must be in the upper class or able to live in a shoebox. The good news is that California is wonderful and people can live happily outside S.F. The city would just need to beef up its transportation infrastructure for an enormous commuting class, and the Bay Area will suffer the environmental implications of that.
If, however, you think there’s value in having residents from a wider range of incomes, you have to be willing to build a ton of new, and very dense, housing, including bulldozing some old areas—not every inch of the city can be treated as a historic artifact. Unfortunately plenty of lefties think that anything that’s good for rich developers must be bad for everyone else, and it just ain’t so. I highly recommend reading Matt Yglesias’s bite-sized ebook The Rent Is Too Damn High, which makes very convincing arguments that loosening development restrictions is a great idea for everyone. Lots of people want to live in S.F., and we should let them. Density is great for the economy and for decreasing the environmental impact of cars and commutes.
Renters are already living very densely packed in “single family” homes, so it’s pretty clear there would be plenty of demand for new apartments in a variety of sizes. The natural opposition to this is going to be existing owners who benefit from rising prices, but certainly a motivated majority (renters) could successfully push for expanded development. Until they realize it’s in their interest, though, they’ll keep complaining about a variety of things that don’t matter while being slowly forced out of the city.
Gainesville has been increasing the density of housing around the university and it seems to be pretty great to me. Until a few years ago it seemed inevitable that there would be ever increasing sprawl and student traffic, but now a lot more students can live in walking distance.
Elgg’s access control system, which determines what content a user can view, is somewhat limited and very opinionated, with several use cases—access control lists, friends—baked into the core system. In hopes of making this cleaner and more powerful, I’ve been studying Drupal’s access system. (Caveat: My knowledge in this area of Drupal comes mainly from reading code, schema, docs, and two great overviews by Mike Potter and Larry Garfield, so please chime in if I run off the rails.)
Drupal’s system also influences update and delete permissions, but here I’m only interested in the “view” permission. Also, although Drupal has hook_node_access()—a procedural calculation of permissions for a node (like an Elgg entity) already in memory—I’m focusing on the systems that craft SQL conditions to fetch only nodes visible to the user. This is critical to get right in the SQL, because if your access control relies on code, you can never predict the number of queries required to generate a list for browsing. In this area, Drupal’s realms/grants API (hook_node_grants()) is extremely powerful.
Realms and Grants in a Nutshell
At a particular time, a user exists in zero or more “realms”: more or less arbitrary labels which may be based on user attributes, roles, associations, the current system state, time…anything. Each realm has been granted (via DB rows) the permission to view individual nodes. So to query, we build up the user’s list of realms, bake that list into the query, and the DB returns nodes matching at least one realm.
E.g., at 2:30 PM today, an anonymous visitor might be in the realms (public, time_afternoon, season_winter*), whereas Mary, who logged in, might exist in the realms (public, logged_in, user_123, friendedby_345, role_developer, team_A, is_over_30, time_afternoon, season_winter). So Mary will likely see more nodes because her queries provide more opportunity to match grant rows. *Note these realms are made up examples.
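A minimal sketch of the idea, assuming an illustrative schema (the function names and the table/column names below are made up for the example, not Drupal’s actual API):

```python
# Sketch of Drupal-style realm/grant querying. The realm names mirror the
# made-up examples above; the schema is hypothetical.

def realms_for(user, now_hour):
    """Compute the realms a user is in at query time."""
    realms = ["public"]
    if user is not None:
        realms.append("logged_in")
        realms.append(f"user_{user['id']}")
        realms.extend(f"role_{r}" for r in user.get("roles", []))
    if 12 <= now_hour < 18:
        realms.append("time_afternoon")
    return realms

def visible_nodes_sql(realms):
    """Bake the realm list into one query: a node is visible if at least
    one grant row matches one of the user's realms."""
    placeholders = ", ".join("?" for _ in realms)
    return (
        "SELECT DISTINCT n.id FROM node n "
        "JOIN node_access g ON g.node_id = n.id "
        f"WHERE g.realm IN ({placeholders}) AND g.grant_view = 1"
    )

mary = {"id": 123, "roles": ["developer"]}
print(realms_for(mary, 14))
# ['public', 'logged_in', 'user_123', 'role_developer', 'time_afternoon']
```

The point being: the per-user work is just assembling a flat list of labels, and the query itself stays a single IN clause no matter how many realm-producing features are installed.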
Clearly this is very expressive, but Drupal (maybe for better) doesn’t provide many features out-of-the-box, so (maybe for worse) doesn’t build in many realms; the API is mostly a framework for implementing an access control system on top of added features. Contrib modules appear up to the task of providing realms based on all kinds of things (groups, taxonomies, associations with particular nodes), but it’s hard to collaboratively build an access control system, so these modules apparently don’t work well with each other and non-access modules must be careful to tap into the appropriate systems to keep nodes protected.
(Implementation oddities: The grants are done in the node_access table, which probably should’ve been called “node_grants”, especially because this table is only somewhat related to the hook called “node_access”. Less seriously—depending on your system size—each realm name (VARCHAR) is duplicated for every node/realm combination, so there’s some opportunity for normalization.)
If you squint, Elgg’s system is a bit similar. Each entity has an “access level” (a realm), with values like “public”, “logged in”, “private”, “friends”, or values representing access control lists (a group or a subset of your friends like a Google+ circle).
That an entity can have only one realm is of course the biggest (and most painful) difference, but also the implementation is significantly complicated by some realms needing to map to different tables. E.g. Elgg has to ensure “friends” maps to rows in an entities relationship table based upon the owner of the entity, while also mapping to the ACL table.
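To make that contrast concrete, here’s a hypothetical sketch (the constants, table names, and column names are illustrative stand-ins, not Elgg’s real schema or API) of how a single access level column forces each level to expand into a different SQL condition, some reaching into other tables:

```python
# Illustrative access levels: one column on the entity row encodes who
# may view it, so the query must OR together a condition per level.
ACCESS_PRIVATE, ACCESS_LOGGED_IN, ACCESS_PUBLIC, ACCESS_FRIENDS = 0, 1, 2, -2

def access_where(user_id):
    """Build the WHERE fragment for entities visible to user_id."""
    clauses = [f"e.access_id = {ACCESS_PUBLIC}"]
    if user_id:
        clauses.append(f"e.access_id = {ACCESS_LOGGED_IN}")
        # The owner can always see their own private content.
        clauses.append(
            f"(e.access_id = {ACCESS_PRIVATE} AND e.owner_guid = {user_id})")
        # "friends" must consult the relationships table, keyed on the
        # entity's *owner* rather than on the entity itself.
        clauses.append(
            f"(e.access_id = {ACCESS_FRIENDS} AND e.owner_guid IN "
            f"(SELECT guid_one FROM entity_relationships "
            f"WHERE relationship = 'friend' AND guid_two = {user_id}))")
        # ACL levels (groups, collections) consult yet another table.
        clauses.append(
            f"(e.access_id IN (SELECT access_collection_id "
            f"FROM access_collection_membership WHERE user_guid = {user_id}))")
    return "(" + " OR ".join(clauses) + ")"
```

Every new kind of realm means another hand-written branch in this condition builder, whereas in the grants model it would be just another row source.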
I imagine a lot of these differences come from Drupal being old as time, with much bigger API reboots, and from Elgg’s access system being targeted to meet the needs of features like friends and user groups, which were built in from the beginning. It’s hard to predict which schema results in faster queries (it will depend on the use case), but I would guess Drupal’s queries are easier to generate and safer to alter.
I think in the long run Elgg would be wise to adopt a realms/grants schema, though I would probably suggest normalizing with a separate “realms” table to hold the name and other useful bits. Elgg group ACLs and friend collections would map directly into realms, but friend relationships would need to be duplicated into realms just like groups have an ACL distinct from the membership relationship. Really I think a grants table could completely replace Elgg’s “entity_relationships” table, since both tables just map one entity to others with a name.
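Here’s a sketch of what that normalized schema might look like (all table and column names are hypothetical), using SQLite to show that the visibility query stays simple even with the realm names factored out:

```python
import sqlite3

# Hypothetical normalized grants schema: realm names live once in a
# "realms" table; grants reference them by id instead of duplicating
# the VARCHAR for every entity/realm combination.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE realms (
    id   INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL
);
CREATE TABLE grants (
    entity_guid INTEGER NOT NULL,
    realm_id    INTEGER NOT NULL REFERENCES realms(id),
    PRIMARY KEY (entity_guid, realm_id)
);
""")
db.execute("INSERT INTO realms (name) VALUES ('public'), ('friendedby_345')")
db.execute("INSERT INTO grants VALUES (1001, 1), (1001, 2), (1002, 1)")

# Fetch entities visible to a user who is in both realms:
rows = db.execute("""
    SELECT DISTINCT g.entity_guid FROM grants g
    JOIN realms r ON r.id = g.realm_id
    WHERE r.name IN ('public', 'friendedby_345')
""").fetchall()
print(sorted(guid for (guid,) in rows))  # [1001, 1002]
```

The grants table here is exactly the “entity mapped to others with a name” shape, which is why it could plausibly absorb entity_relationships.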
As for Drupal, I think the docs could more clearly describe realms and grants (unless I’ve totally got this wrong). I’m less sure of the quality of the API that populates/maintains the tables; it looks like the hooks are pretty low-level ways of asking “would you like to dump some rows into node_access?” and it’s not clear how much of the table must be rebuilt or how often this happens.
A great way to make your pull requests easier to review is to reduce each commit to a particular purpose/functional change. In practice this often means not combining multiple feature additions in a single commit, or not including whitespace changes in a commit that makes some code change.
git add is just not great for anything but very trivial changes, so I use GitX (the active fork) when I’m on OSX.
Getting to the UI is a quick gitx in terminal and command+2 to switch to the Stage view. Here you can move changes in/out of staging by dragging and dropping files or by clicking on individual lines in diff views. This lets you run wild and loose during development and then, with a scalpel, carefully craft your changes into a logical set of cleaner commits. E.g. if you’re altering code, unstage all your unrelated whitespace changes and put them in the last commit, or just revert them, or git stash them to move them to another branch.
Experts can give you a list of reasons why including target="_blank" on links is bad for UX/accessibility, and they’re mostly all right, but they tend to ignore the 5B-pound gorilla in the room: social media sites and Gmail all open external links in new windows, and users (including savvy users who understand middle-click etc.) expect them to, and a huge part of UX is doing what the user expects.
My hypothesis is that users see some sites as applications (especially those that have popular app versions) that they would not generally close just to read a story. This change is also surely influenced by the fact that there is no instant way to context click on touch devices as there is with a mouse.
My point is we need to study this phenomenon with real users, who are rapidly moving to touch devices, and not let our preferences and value judgments, formed over years of desktop PC browsing, take over.
We must be willing to admit that in some scenarios the game has changed and we no longer know what is “best”. And this could be a case where what is best for UX is not best for accessibility or for promoting the understanding of browser technology. It would not be the first time.
Kristen Schaal plays an adorable tourist in a film shot in soft-focus Europe in the early 1900s. In a stunning/terrifying scene she rides a rickety ski lift contraption hundreds of feet up a mountain, with the steep path twisting and turning to follow a busy street that wraps up the mountainside.
Later: I’m in an abandoned storage unit and find an amazing 80’s drum machine made by a kitchen appliance manufacturer. It’s all black and grey plastic, has tons of knobs, most of the labels are worn off, and I need to get my hands on batteries and a cassette 4-track ASAP.
I like to make small alterations to standard tuning that allow smaller intervals in the lower register.
Am9 (detune the D to C)
Some chords (note the 3rd intervals between the A and C strings):
A 5-4-4-6-x-x (or just 5-x-4-6-x-x)
A7 5-7-7-6-5-5 (standard E bar shape!)
Emaj9 0-9-8-8-0-0 (nice 2nd interval between the F# and G#)
D6add9 (detune the G to F#)
This gives you an open major 3rd interval and an easy D6add9 in the upper 5 strings. Some chords:
D x-0-0-0-3-5 OR x-0-0-0-7-x
With the recent discussions of hiking the federal minimum wage, I came across a plan that might solve three MW-related problems:
- The MW is rarely enough to get by on.
- The MW eliminates (from the regulated market at least) all positions that create less value than the MW. Do you have a great idea for a job that someone might happily do for $7.00/hour? Sorry, that job can’t exist legally.
- Inexperienced workers or those with criminal records present more risks to employers, and the MW makes it impossible to offset that risk by reducing the starting wage a bit. This makes it hard for these folks to get a foot in the door.
Because of problem 1 (and because we make a value judgment on the individual and project that value onto our expectations of wages), we tend to only push the MW up, which just exacerbates problems 2 and 3. Then we paper over those failures by paying the unemployed to remain idle, which is bad for their health, their skills, and their community’s productivity. So if there had never been a MW, introducing one would sure seem like a bad idea compared to Morgan Warstler’s plan of just having the government pay the difference between the market wage and what society feels individuals need to get by on ($280/wk in his outline).
A particular flavor of wage subsidy, this plan would set up a second labor market where qualifying employers (almost any small business) would have access to workers for as low as $40/wk—low enough to create almost infinite demand for labor—but these employees would take home at least $280/wk (the “Guaranteed Income”), with the government picking up the difference.
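The arithmetic is simple enough to sketch (the weekly figures come from the outline above; the function name and floor check are my own illustration):

```python
# Worked example of the wage-subsidy arithmetic: employers bid at least
# $40/wk, workers always take home $280/wk, government pays the gap.
GUARANTEED_INCOME = 280  # what the worker takes home per week
EMPLOYER_FLOOR = 40      # lowest weekly wage an employer may bid

def weekly_subsidy(employer_wage):
    """Government tops up the market wage to the guaranteed income."""
    if employer_wage < EMPLOYER_FLOOR:
        raise ValueError("bids below the floor are not allowed")
    return max(0, GUARANTEED_INCOME - employer_wage)

print(weekly_subsidy(40))   # 240: the maximum per-worker subsidy
print(weekly_subsidy(150))  # 130: competition for workers shrinks the subsidy
print(weekly_subsidy(300))  # 0: above $280 the market wage stands on its own
```

Note how the subsidy automatically phases out as employers compete wages upward, which is part of the plan’s appeal.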
The exciting thing here is this could instantly produce almost full employment, giving workers lots of choice of jobs and forcing employers to compete for even low-wage workers in pay and work conditions. The evidence seems to suggest that Germany’s wage subsidies (in the form of shorter work weeks) allowed Germany to have one of the lowest unemployment rates of the OECD countries during the recession and it probably cost less than the equivalent unemployment benefits, too. It certainly reduced the extremely damaging effects that unemployment has on individuals and families.
Warstler also suggests having employees and employers use an eBay-like ratings platform to improve information flow (the value of employees, the conditions/benefits of jobs, bad behavior of parties) within the market. This seems like a good idea, and sites like Glassdoor show there’s demand for it at the higher end of the payscale, but I have some doubts that it will make such a huge difference; I think it will still be weird and a bit dangerous to your working relationship to publicly rate your boss. Anyway, I don’t see why we can’t roll this out independently of GI.
Ultimately I think this plan—and all the other wage subsidy schemes—sounds much better for workers than the current “if-you-can-find-work-at” minimum wage system, which seems as hopelessly flawed as every other attempt to artificially dictate prices.
A few other concerns with the plan:
- Not all employers qualify for GI workers and I think the question of which can/can’t will be difficult to pin down. E.g. If I hire my neighbor to do all my chores and vice versa, we each end up pocketing $240/wk in subsidies. Rooting out these schemes would be tough. Warstler seems to think we don’t necessarily need to and that we might get valuable information from seeing how these pan out.
- GI employers get workers for significantly less than non-GI employers, and I wonder what kind of market distortions that would create. The answer may be none.
- The transition into GI could be chaotic, as overnight there’d be countless jobs to choose from and non-GI employers would have to react to this serious competition. We just have no idea what kind of jobs people will come up with, but every additional job would seem to improve the situation for workers.
- How do GI workers get stable healthcare? Do they get access to a group plan? This could cause problems for workers wishing to move between the GI/non-GI markets.
If we’re stuck with a minimum wage, it would seem best to keep it as low as possible and boost/widen availability of tax credits like the EITC. Essentially this also works as a wage subsidy, just by another name.
And can we just get rid of the awful “tipping” system/reduced wage for food workers, as if they’re for some reason especially deserving of having every night’s pay be at the whim of customers and dozens of other elements out of their control?
There’s no perfect way to develop software and use source control because projects, teams, and work environments can vary so much; what works for a small office of employees might not for a loose group of part-time contributors spread across many timezones, as many open source projects are.
Jade Rubick is not a fan of long-running feature branches in git and—if I’m reading this right—argues for merging into master frequently, not waiting for an entire feature to be implemented. This is supposed to force the team to be aware of all code changes occurring.
While lack of communication about features in development can certainly be problematic, I think this is a sledgehammer of a solution. Taking this approach to its extreme, it might make sense to have all developers work huddled together so they can say what they’re working on in real-time, or all work on one workstation. My point is that using a workflow with high costs to address a communication deficit is not so great an idea.
My big fear of this workflow is that it eases the flow of incorrect/unwise code into production. Who reviews this code? What if it takes the whole codebase in a bad direction, but no one at the moment has time to realize that? I think the benefits of feature branches/pull requests are just huge:
- A branch frees the developer to experiment big and take chances without forcing the rest of the team down their path. The value in some big ideas will not be apparent looking at them piecemeal. Some of this work will lead to great things, some will be tossed away, all of it will be good learning.
- Likewise, the PR process can catch incorrect/unwise solutions before they’re merged into the product. This is huge. Some ideas sound great but you only realize 80% into the work that they’re unwise. If that work is sitting in a PR, you just close it and it can live on as a reminder to future devs who get the same idea. If not, you now have the job of shoehorning that code out. On the codebases I work on so many features have been improved/overhauled/abandoned by the review/feedback loop that it seems absolutely crazy to bypass this process. In an async distributed team, I think the PR is basically the perfect code review tool.
- PRs provide a great historical and educational record of what changes were involved in providing a certain feature, which files were involved, etc. I’ve found reading pull requests and merge diffs to be just as illustrative as reading source code. If a feature required changes in dozens of files over three weeks, how will I ever piece together the 6 commits out of 100 that were important?
- Feature branches make it a lot easier to revert a feature or apply it to another branch. For life on the edge I’ve built upon versions of frameworks with experimental branches merged in. If I regret this I can always generate a revert commit to sync back up with a stable branch.
All workflows have costs/benefits; I just think the benefits of not merging feature branches until they’re really ready are huge compared with the costs Jade described.
My hunch is there must be better ways to keep a team aware of other work being done on feature branches. E.g. Make pull requests as soon as the feature branch is created and push to it as you work. That way team members can set aside time to check in on pull requests in progress and provide feedback.
I agree with Jade that feature flags can be a great idea, but that’s mostly orthogonal to source control workflow.
The word needs to be retired. It gets invoked by some to mean a type of dog-eat-dog libertarianism and by others as shorthand for the abuses of corporations and unjust outcomes of markets.
I see the idea as the simple preference that people be free to participate in voluntary markets, not be forced into labor, and be allowed to own property and build wealth. This is real basic stuff that almost everyone agrees on because history showed all the competing ideas led to mass poverty and starvation.
The real important questions of today are how a government deals with the fact that people don’t live in bubbles: they don’t agree on things, they can become physically or mentally ill, they can misbehave under the influence of substances, they can make poor decisions and have bad luck, they can take advantage of others, they can use their wealth to buy advantages and monopolies, and they can be born into bad situations through no fault of their own.