Rework of OpenID Selector module

Submitted by Barrett on Fri, 03/09/2012 - 17:24

The openid_selector_inline submodule of OpenID Selector provides interface elements that replace the core OpenID input field with provider icons that users can click to authenticate to their identity provider. The module provides no real configuration options, though. When enabled, it automatically places its icons on the /user/login form and on the login block.

I'm going to update the module to provide an admin settings form that lets the site admin toggle the icons on either of those forms and manually enter the form_ids of any other forms on which the icons should appear.
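Roughly, I expect the change to hang off hook_form_alter(). Here's a minimal sketch of the idea; the variable name and the _openid_selector_inline_attach_icons() helper are hypothetical placeholders, not the module's actual code:

/**
 * Implements hook_form_alter().
 *
 * Attaches the provider icons to any form whose form_id appears in the
 * admin-configured list.
 */
function openid_selector_inline_form_alter(&$form, &$form_state, $form_id) {
  // Default to the two forms the module currently targets.
  $enabled = variable_get('openid_selector_inline_form_ids', array('user_login', 'user_login_block'));
  if (in_array($form_id, $enabled, TRUE)) {
    // Swap the core OpenID field for the clickable provider icons.
    _openid_selector_inline_attach_icons($form);
  }
}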


Rules-based Taxonomy Term Creation in Drupal 7

Submitted by Barrett on Fri, 03/02/2012 - 09:04

This morning, I went looking for a means to automatically create a Taxonomy term when a new node of a given type is published. To my surprise, though, the old Taxonomy action set which appeared in the D6 version of Rules was missing from the D7 version. A quick search of the Rules issue queue revealed that I wasn't the only one looking for Taxonomy actions. Several more minutes of poking around revealed that the solution is there; it's just obscured in the interface.

[Screenshot: the Rules action type list]

The critical bit of information which I had forgotten is that, in Drupal 7, Taxonomy terms are entities, so the Rules interface expects you to create a new term by selecting the "Create a new entity" action then selecting "Taxonomy term" for the entity type. From there, setup proceeds about as you'd expect until you get to the point of selecting the vocabulary in which the new term should appear. In my case, I wanted to create a term in the Projects vocabulary, so I entered the vocabulary's machine name "project" and promptly got an error.

[Screenshot: the error message]

The problem is that the interface expects the vocabulary's vid, which, from what I can see, doesn't appear anywhere in the interface; the interface uses the machine name everywhere. The only place I could find the vid was in the taxonomy_vocabulary table in the database.
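As a workaround, you can at least avoid digging through the database: Drupal 7's taxonomy API will hand you the vid if you give it the machine name. A quick snippet you could run through drush or the Devel module's "Execute PHP" block:

// Load the vocabulary object by its machine name and read off the vid.
$vocabulary = taxonomy_vocabulary_machine_name_load('project');
print $vocabulary->vid;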

So, to my mind, there are two non-trivial usability problems with the interface for creating taxonomy terms in Rules. First, you have to know that Taxonomy terms are entities. Now, this is probably not an unreasonable expectation. Let's face it: if you don't know that they're entities, should you really be poking around in Rules? The counter-argument, though, is that entities are an abstraction layer meant to make things easier to work with. I shouldn't have to know that terms are entities; that should be handled under the hood. Frankly, I can go either way on that point, but it would make things much more accessible if Taxonomy actions appeared in the action list directly. Really, they would just be shortcuts to the interface elements and functionality which already exist under the "Create a new entity" tree.

The second point seems like much more of an issue to me. If the interface demands information from me, I should be able to find that information in the interface. If Rules is going to require a vid, then I expect that value to appear somewhere on the vocabulary page. If it doesn't (as is currently the case), then I expect Rules to accept the machine name which does appear on the vocabulary page.

[Screenshot: the vocabulary edit screen, showing the machine name but not the vid]


Indexes on collections that don't exist: one of the mysteries of Mongo

Submitted by Barrett on Sat, 11/19/2011 - 21:14

I've been working on a Drupal module which aggregates and reports on some data from Mongo. To keep from having to re-do all the aggregations, the aggregated data itself is written off to a Mongo collection, which is then retrieved and displayed when the reports are requested. To make the retrieval more efficient, I need an index on the collection storing the aggregated data.

In a traditional Drupal environment, this is a simple setup: in the module's .install file, I'd define the storage table's structure, with the necessary index, in a hook_schema() implementation, let the install process create the table, and voilà. The problem is that Mongo collections don't exist until you write data into them. So how do you define an index on a collection that doesn't exist?
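For comparison, the relational version really is just a few lines (the table and field names below are illustrative, not my actual schema). In Drupal 7 core installs the schema automatically when the module is enabled; in Drupal 6 you'd call drupal_install_schema() from hook_install():

/**
 * Implements hook_schema().
 */
function mymodule_schema() {
  $schema['mymodule_aggregates'] = array(
    'description' => 'Stores pre-computed aggregate data.',
    'fields' => array(
      'aid' => array('type' => 'serial', 'not null' => TRUE),
      'foo' => array('type' => 'int', 'not null' => TRUE, 'default' => 0),
    ),
    'primary key' => array('aid'),
    // The index that makes report retrieval efficient.
    'indexes' => array('foo' => array('foo')),
  );
  return $schema;
}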

It turns out the answer is freakishly simple: Mongo is perfectly happy to set an index on a field which doesn't exist in a collection which doesn't exist; the collection springs into existence as soon as you declare an index on it. Coming from an RDBMS background, this blew my mind the first time I saw it happen.
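In module terms, that means the install step can simply declare the index and trust the collection to appear. Here's a minimal sketch using the legacy PECL mongo driver; the database, collection, and field names are placeholders rather than my module's actual code:

/**
 * Implements hook_install().
 */
function mymodule_install() {
  // Connecting creates nothing by itself; the collection springs into
  // existence the moment the index is declared.
  $mongo = new Mongo();
  $collection = $mongo->selectDB('junkdb')->selectCollection('fnord');
  // Declare an ascending index on a field that doesn't exist yet.
  $collection->ensureIndex(array('foo' => 1));
}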

The text below shows the call and response from creating a new database, listing the collections in the database (to demonstrate that there are none), building an index on a new collection, and verifying the index.

> use junkdb
switched to db junkdb
> show collections
> db.fnord.ensureIndex({foo:1})
> show collections
fnord
system.indexes
> db.fnord.getIndexes()
[
    {
        "name" : "_id_",
        "ns" : "junkdb.fnord",
        "key" : {
            "_id" : 1
        },
        "v" : 0
    },
    {
        "_id" : ObjectId("4ec86ed7ba553b94e5ae3dbc"),
        "ns" : "junkdb.fnord",
        "key" : {
            "foo" : 1
        },
        "name" : "foo_1",
        "v" : 0
    }
]
> 

Mongo MapReduce FTW!

Submitted by Barrett on Tue, 09/27/2011 - 19:53

One of the systems I've lately inherited makes heavy use of Mongo for data storage, a data system I've not used previously. So, when the boss called tonight and said that his boss needed counts of an object in our system, by state, within the next 10 minutes, my thinking went something like...

No problem, that's a simple SQL group-by.... Oh, wait. This is Mongo. Oh, crap! How do I do that?! It's a function; Map... Something.

The function I was looking for was MapReduce. Basically, MapReduce is a function applied to a collection which itself takes two functions as parameters. The first parameter, the map function, converts every item in the collection to a key-value pair. The second function then reduces (hence the name MapReduce) the set of key-value pairs coming out of the first function to a single result per distinct key, whose value holds the aggregate for that key (in my case, a count). It's Mongo's answer to the SQL group-by.

For instance, the functions which solved my problems were:

var map = function() {
  // Emit one {count: 1} pair per document, keyed by the user's state.
  var key = {'state': this.user.state};
  emit(key, {count: 1});
};

var reduce = function(key, values) {
  // Sum the counts emitted for each distinct key.
  var sum = 0;
  values.forEach(function(value) {
    sum += value['count'];
  });
  return {count: sum};
};

// Run the job, returning the results inline rather than writing them
// to an output collection.
db.my_collection.mapReduce(map, reduce, {out: {inline: 1}});

I haven't worked with it much beyond this immediate usage, but I get the sense that, while it's nowhere near as simple, it's probably significantly more powerful than the SQL group-by clause.


A Mnemonic for the DC Metro Red Line

Submitted by Barrett on Thu, 09/08/2011 - 09:44

Now that I'm commuting into the city each day, I've been forced to actually learn my way around Metro. It didn't take me long to get tired of checking my phone for the station map, so I put together a little mnemonic to help me remember the order of the stations on my route: Bethesda to Union Station.

My best friends are all American,
they go to UDC.
On the weekends, we go to the parks
then circle round to the far north
and catch the metro to the galleries.
Then we make a quick trip to see the judge before joining the union.


Joining the Cult

Submitted by Barrett on Thu, 08/11/2011 - 08:25

[Image: Adoration of the Apple]

I've joined the Cult of Mac.

For several years now, I've been running Ubuntu Linux on all my personal machines. I loved the freedom of Linux, that I could recompile the kernel to do things exactly as I wanted. I took elitist joy at the puzzled looks on non-tech people's faces when I told them I ran Linux (and then had to explain what Linux, and often an operating system, was). And, to be quite frank, I loved not having to fork over a ton of money for software.

The thing is, while I loved that I could recompile the kernel, I really hated that I so frequently had to. The wireless card in my netbook didn't play nicely with Linux, so each time I updated the kernel I had to roll in a code patch and recompile to get it working. Getting my printer working required another recompile. Syncing my iPod to my music collection, while it didn't require a kernel recompile, required three separate programs, one of which had only a command-line interface. Don't get me wrong, I really like the command line, but it's just not the right medium for controlling an iPod.

Maybe it's a sign that I'm getting older, but I don't want to invest time and energy into making my computer work anymore. I have things I need to accomplish on my computer, and I need to know that when I sit down to do them, the computer will cooperate with me. I have neither the time nor the energy for mucking with the computer for the sake of mucking with the computer.

So, after a brief flirtation with Windows 7 (I was weak, I admit), I ordered myself a MacBook. I've only had it about a week now, but so far it's everything I hoped it would be. It has a Linux-style command line that handles all the command-line tools I love, like grep, curl, and tar, and... stuff just works! To connect to my home wireless-N network using WPA2 security, all I had to do was enter the passphrase. I didn't have to recompile anything or go digging through forum posts for solutions; it just worked. It even comes with PHP, Python, and Apache pre-installed.

I'll still keep a Linux box around the house for some things, but I don't see myself going back from the Mac anytime soon.


How Mozilla is using data to manage their developer community

Submitted by Barrett on Sat, 04/09/2011 - 11:02

Developing Community Management Metrics and Tools for Mozilla

I came across the article linked above, which discusses the new dashboard Mozilla has developed to help monitor and manage their developer community, and thought it appropriate given my previous post about the need to be able to monitor your community.
[Screenshot: the Mozilla dashboard]
Mozilla's dashboard is geared around monitoring code commits by users, but the same concepts could be used to track any active user participation (e.g., discussions created, comments posted, links clicked).


Architecting for Communities: notes on "Drupal Voices 175: Clay Shirky on Social Media Theory and Drupal"

Submitted by Barrett on Fri, 04/08/2011 - 08:30

Drupal Voices 175: Clay Shirky on Social Media Theory and Drupal

In his interview for Lullabot's Drupal Voices series, social media guru Clay Shirky made two points I found especially interesting, both of which have direct impact on how we design social sites.

First, he pointed out that the integration of social functionality is redefining what it means to be a "successful" site. From a social psychology view, communities are dynamic entities, continually evolving as the membership waxes and wanes and the focus of the core membership changes. As such, a site built to support a community element must evolve in parallel with its community. While previously success meant that a site had been well constructed and implemented, now it means that the site is able to continually learn from the evolving community and adapt to support the community's needs. Success is no longer something which is attained but a goal to be continually sought. It also means that a site can really only be considered done when it is being taken down. If the site remains up, it must be continually refactored, expanded, and improved as though it were a living thing itself.

For site design, the implications of this idea are that the site must be architected in such a way that adding or changing features of the site is not prohibitively complex and that the site incorporate monitoring and feedback mechanisms by which the site managers can evaluate the changing needs of the community and receive direct input from the community members as to what they need. In essence, the site must provide a means of determining what needs to change and ensure that those changes can be implemented without necessitating a complete tear-down of the site.

The second point Clay made was that users of a community site are going to be distributed across a range of involvement, from the very highly involved individuals who drive a community to the occasionally involved users who may make one or two posts/commits/etc. over the life of the community. Further, the vast majority of contributions will come from a small core of individuals, while the bulk of individuals will make only a small number of contributions, as conceptualized by the Long Tail concept or the Pareto (aka "80/20") distribution. Clay was speaking specifically of open-source development projects like the Drupal community, but I believe the same distributions will hold true whether involvement is measured in commits to a code repository or discussion posts.

The goals, then, are to provide tools which enable those high-committing individuals and, simultaneously, to reduce the barriers to contribution which could daunt the long-tail contributors. For the former group, this could mean opening access to system APIs or providing special "power-user" accounts with more access than is granted to the common user. For the latter group, this could mean enabling anonymous commenting so that users can be involved in the community without registering, or ensuring that functionality is clear and accessible with minimal clicks so that users don't get lost or lose interest en route to contributing.

In essence, we must do what we've always known we needed to do: know and understand our users and grow along with them. If we don't, they won't be our users for long.


Great introductory Agile videos

Submitted by Barrett on Sun, 04/03/2011 - 19:02

I came across these great videos giving a simple introduction to what Agile is and how it differs from waterfall. Unfortunately, it doesn't appear their author is making any more in the series.

  • User Stories from Agile Advocate on Vimeo.
  • Agile Planning from Agile Advocate on Vimeo.
  • The Perfect Plan from Agile Advocate on Vimeo.
  • Agile Stand Ups from Agile Advocate on Vimeo.
  • Agile Retrospective from Agile Advocate on Vimeo.


DrupalCon Chicago wrap-up

Submitted by Barrett on Sat, 03/12/2011 - 06:56

DrupalCon Chicago is done. Now it's time to unpack, review and condense my notes, and begin to sort out how to integrate everything I learned into my processes going forward. In subsequent posts, I'll expand on each of the points below, but my goal at the moment is to lay out the biggest things I'm bringing back from the con.

  • The main theme I'm bringing back, which I expect is going to impact several of the ways in which I work, is that it's time for the Drupal community to grow up. Dries talked about this in his keynote, and the concept was repeated in subsequent sessions. Now, I do not mean (and I don't think Dries meant) that the community needs to become all serious and buttoned-down. The community has always been pretty easy-going and self-directing, and I think that has been to our benefit. What I mean is that we need to become more rigorous in our development processes. Automated testing, continuous integration, and better separation of UI and API were concepts raised in several sessions during the conference. These are the ways in which we need to be growing up: taking our code more seriously, not ourselves.
  • Maestro is a module I came across in a BoF that I'm really excited about. Created by Nextide (who also write the FileDepot module), Maestro is a workflow/BPM engine with a visual workflow editor. While I haven't had an opportunity to fully evaluate the module yet, the feature set which was demonstrated in the BoF paralleled (and in a couple cases, exceeded) the core features available in a high-priced, proprietary system I recently evaluated for work.
  • I'm also really excited about the improvements coming to UberCart in its new incarnation as the Drupal Commerce module. UberCart has famously lagged behind Drupal Core and required a lot of work-arounds and hacks to customize. Drupal Commerce promises to correct that, doing things the Drupal way and reducing the complexity and hackish-ness of e-commerce on Drupal.