Hasin Hayder: Passing data from PHP to the enqueued scripts in WordPress (30.9.2014, 17:16 UTC)
Link
Paul M. Jones: Action-Domain-Responder and the “Domain Payload” Pattern (30.9.2014, 17:16 UTC)

tl;dr: Instead of inspecting a Domain result to determine how to present it, consider using a Domain Payload Object to wrap the results of Domain interaction and simultaneously indicate the status of the attempted interaction. The Domain already knows what the results mean; let it provide that information explicitly instead of attempting to re-discover it at presentation time.

In Action-Domain-Responder the Action passes input to the Domain layer, which then returns some data for the Action to pass to the Responder. In simple scenarios, it might be enough for the Responder to inspect the data to determine how it should present that data. In more complex scenarios, though, it would make more sense for the Domain to pass back the data in a way that indicates the status of the data. Instead of the Responder inspecting the Domain results, the Domain should tell us what kind of results they are.
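As a rough sketch of what such a status-carrying result might look like (the class and constant names here are illustrative, not taken from the article):

```php
<?php

// Minimal sketch of a "Domain Payload" object. The Domain returns one of
// these, and the Responder only switches on the status instead of probing
// the entity for failure flags.
class DomainPayload
{
    const UPDATED   = 'UPDATED';
    const NOT_FOUND = 'NOT_FOUND';
    const NOT_VALID = 'NOT_VALID';
    const ERROR     = 'ERROR';

    private $status;
    private $data;

    public function __construct($status, array $data = array())
    {
        $this->status = $status;
        $this->data   = $data;
    }

    public function getStatus()
    {
        return $this->status;
    }

    public function get($key)
    {
        return isset($this->data[$key]) ? $this->data[$key] : null;
    }
}

$payload = new DomainPayload(DomainPayload::NOT_FOUND, array('id' => 88));
echo $payload->getStatus(); // NOT_FOUND
```

The Responder can then map each status to an HTTP response without re-discovering what happened.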

As an example, let’s look at some older code to update a Blog post. This is MVC-ish code, not ADR code; we’ll refactor along the way.

<?php
class BlogController
{
    // POST /blog/{id}
    public function update($id)
    {
        $blog = $this->model->fetch($id);
        if (! $blog) {
            // 404 Not Found
            // (no blog entry with that ID)
            $this->response->status->set(404);
            $this->view->setData(array('id' => $id));
            $content = $this->view->render('not-found');
            $this->response->body->setContent($content);
            return;
        }

        $data = $this->request->post->get('blog');
        if (! $blog->update($data)) {
            // update failure, but why?
            if (! $blog->isValid()) {
                // 422 Unprocessable Entity
                // (not valid)
                $this->response->status->set(422);
                $this->view->setData(array('blog' => $blog));
                $content = $this->view->render('update');
                $this->response->body->setContent($content);
                return;
            } else {
                // 500 Server Error
                // (i.e., valid data, but update failed for some other reason)
                $this->response->status->set(500);
                return;
            }
        }

        // 200 OK
        // (i.e., the update worked)
        $this->response->status->set(200);
        $this->view->setData(array('blog' => $blog));
        $content = $this->view->render('update');
        $this->response->body->setContent($content);
    }
}
?>

We can see that there is some amount of model work going on here (look for a blog post, attempt to update it if it exists, check for error conditions on the update attempt). There is also some amount of presentation work going on; remember, the view is not the template – the view is the response. So, even though the view templates are separated, the HTTP status codes are also part of the presentation, meaning that there is an insufficient level of separation of concerns.

In converting this to Action-Domain-Responder, we can pretty easily extract the model work to a Domain, and the presentation work to a Responder, resulting in something like the following. (Note that the Domain layer now adds values to the returned $blog entity to indicate different failure states.)

<?php
class BlogUpdateAction
{
    // POST /blog/{id}
    public function __invoke($id)
    {
        $data = $this->request->post->get('blog');
        $blog = $this->domain->update($id, $data);
        $this->responder->setData(array('id' => $id, 'blog' => $blog));
        $this->responder->__invoke();
    }
}

class BlogUpdateResponder
{
    public function __invoke()
    {
        $blog = $this->data['blog'];
        if (! $blog) {
            // 404 Not Found
            // (no blog entry with that ID)
            $this->response->status->set(404);
            $this->view->setData(array('id' => $this->data['id']));
            $content = $this->view->render('not-found');
            $this->response->body->setContent($content);
            return;
        }

        if ($blog->updateFailed()) {
            // 500 Server Error
            // (i.e., valid data, but update failed for some other reason)
            $this->response->status->set(500);
            return;
        }

        if (! $blog->isValid()) {
            // 422 Unprocessable Entity
            // (invalid data submitted)
            $this->response->status->set(422);
            $this->view->setData(array('blog' => $blog));
            $content = $this->view->render('update');
            $this

Truncated by Planet PHP, read more at the original (another 3442 bytes)

Link
SitePoint PHP: Interactive PHP Debugging with PsySH (29.9.2014, 16:00 UTC)

It’s 1:00 a.m., the deadline for your web application’s delivery is in 8 hours… and it’s not working. As you try to figure out what’s going on, you fill your code with var_dump() and die() everywhere to see where the bug is…

You are annoyed. Every time you want to try out a return value or variable assignment, you have to change your source code, execute your application, and see the results… In the end, you are unsure of whether or not you’ve removed all those var_dumps from the code. Is this situation familiar to you?

PsySH to the rescue

PsySH is a Read-Eval-Print Loop (or REPL). You may have used a REPL before via your browser’s JavaScript console. If you have, you know that it possesses a lot of power and can be useful while debugging your JS code.

As for PHP, you might have used its interactive console (php -a) before. There, you can write some code and the console will execute it as soon as you press Enter:

$ php -a
Interactive shell

php > $a = 'Hello world!';
php > echo $a;
Hello world!
php >         

Unfortunately, the interactive shell is not a REPL since it lacks the “P” (print). I had to execute an echo statement to see the contents of $a. In a true REPL, we would have seen it immediately after assigning the value to it.
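For comparison, PsySH prints the value of each expression immediately. A sketch of a typical session (exact prompt and banner formatting may vary by version):

```text
$ psysh
>>> $a = 'Hello world!'
=> "Hello world!"
>>> strtoupper($a)
=> "HELLO WORLD!"
```

Note how the assignment’s value is echoed back right away – no echo statement required.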

Continue reading Interactive PHP Debugging with PsySH

Link
Piotr Pasich: Rabbit behind the scenes (28.9.2014, 06:38 UTC)

Queuing in the background – getting started with RabbitMQ message broker

In PHP, business logic is usually put right in the action’s method, or just behind it. As a result, every slow or long-running piece of code is processed within the request. The problem is almost undetectable when a user merely sends an e-mail, but more complex actions may take quite a bit longer than preferred.

Handling e-mails is the first example that comes to mind, but let’s imagine that an application needs to compress an image, or a whole batch of imported images. It would take far too long for the page to return any response. Most developers would probably create a simple “queue” table and poll it with a SELECT * FROM queue WHERE done = 0 query. Unfortunately, databases are not designed to handle heavy traffic of this kind, so the solution only works at tiny volumes, where – to be honest – using queues is unnecessary. In any other situation it will cause deadlocks or stuck jobs, and lead to further problems as traffic grows. Of course, it is possible to write a custom solution that queues the import and renders the images one by one in the background right after the request, but that is unreasonable effort considering the availability of ready-made products.
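To make the anti-pattern concrete, here is a minimal sketch of such a polling worker, using an in-memory SQLite table (a contrived illustration – the table layout, column names and payload format are made up for this example). It works at toy scale; under real traffic, it is exactly this SELECT/UPDATE churn that produces the locking problems:

```php
<?php

// The naive "queue table" approach: requests enqueue work, a worker polls.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE queue (id INTEGER PRIMARY KEY, payload TEXT, done INTEGER NOT NULL DEFAULT 0)');

// The web request only enqueues the slow work...
$insert = $db->prepare('INSERT INTO queue (payload) VALUES (?)');
$insert->execute(array('compress:image-1.png'));
$insert->execute(array('compress:image-2.png'));

// ...and a background worker polls the table for pending jobs.
$jobs = $db->query('SELECT id, payload FROM queue WHERE done = 0')
           ->fetchAll(PDO::FETCH_ASSOC);

foreach ($jobs as $job) {
    // ... process $job['payload'] here (e.g. compress the image) ...
    $db->prepare('UPDATE queue SET done = 1 WHERE id = ?')
       ->execute(array($job['id']));
}

echo $db->query('SELECT COUNT(*) FROM queue WHERE done = 0')->fetchColumn(); // 0
```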

Besides, would a custom-made solution support scalability, cloud deployments or high throughput out of the box? It might, but certainly not after only a couple of hours of work – within that time frame a home-grown solution will generally be inefficient and poorly prepared for future development and reuse. So this method is no perfect fix, and we should find another way to cope with queuing.

As you might presume, there is an effective solution that mitigates the problem described above: using a ready-made message broker. Basically, the broker receives a message (say, a new database record) with every request, but it will not cause a deadlock, because the consuming server asks the broker for data only when it has the resources and capacity to process it.

In this article I would like to present a solution to a very annoying everyday problem that many programmers have probably come across in their organisations – deadlocks in databases caused by a vast number of requests in a relatively short time. The main aim of this text is to introduce RabbitMQ, which I value as a very functional and practical message broker, to help you solve queuing problems and decrease the amount of work you would otherwise have to spend on them.

Why do we need a message broker?

Queuing brokers are useful for several reasons. Firstly, a queue between an application and a web service reduces coupling, because both only need the queue’s configuration parameters and its name; it is far easier to move one system to a different server or environment than to move a whole queue-management setup. Secondly, queues work as a buffer: if the main application receives a lot of requests in a short time, it would otherwise get stuck. It is still able to process them all eventually, and using a broker prevents deadlocks – the broker holds any requests which cannot be processed on time by the main application. Moreover, queuing is a smart bridge between applications written in different technologies and languages. A piece of software does not need to know anything about the receiver; it only sends a message to the broker, which handles the messages and distributes them to external programs.

How to choose the perfect broker?

As with choosing a framework or even a programming language, several tools for queuing tasks in the background are available. According to the T-shaped principle (gain a wide spectrum of skills, with one perfectly mastered), you should have basic knowledge of most of the queuing tools but dig into only one. Each one is different and suited to different problems, and it is really important to be aware of the pros and cons of every alternative.

Because of this great number of choices, there is a website, http://queues.io/, that collects and compares queuing libraries, provides short descriptions of their strengths and weaknesses, and as a result helps narrow down the options and select the right tool.

I would advise choosing between one of these: RabbitMQ (http://www.rabbitmq.com), Gearman (http://gearman.org/), ActiveMQ (http://activemq.apache.org/), Apollo (http://activemq.apache.org/apollo/), laravel (

Truncated by Planet PHP, read more at the original (another 51441 bytes)

Link
SitePoint PHP: A Year of SitePoint (27.9.2014, 18:00 UTC)

Today marks the one year anniversary of my editing job at SitePoint.

It sure went by fast - in fact, I was genuinely surprised when LinkedIn “likes” started popping up. In this short post, I’d like to take a moment to reflect on the past 365 days, and share with you what I’ve learned and how things changed.

Chaos in the Old World

When I first took the job and stepped into Tim’s shoes, I’m not going to lie - I was overwhelmed. With a big and chaotic author roster, I had very little time to get familiar with the writers, the approach to accepting drafts, the editing process, and communication with both HQ and the author pool.

Vagrant was not yet the popular tool it is now, and Docker basically didn’t exist, which made things harder to test - every code sample came with its own prerequisites, and it was more work than my home Linux playbox could handle; sometimes I reinstalled the PHP development environment several times per day. My machine looked messier every day.

Continue reading A Year of SitePoint

Link
Peter Petermann: Composer & virtual packages (27.9.2014, 13:24 UTC)

Composer & virtual packages

Composer has been a blessing for the PHP community, and many many people use it today. However, most people don’t know everything it can do – I, for one, still learn something new every now and then. A few days ago I stumbled over a “virtual package” on Packagist – and found it to be a feature I had actually been missing in Composer. It turns out Composer can do it; it’s just not so well documented.

So what is this about? Virtual packages allow you to have a looser dependency. Rather than depending on a specific package, you depend on a virtual one, which can be fulfilled by any package that provides the virtual one. Sounds confusing? Let’s have a look at an example.

Let’s assume you are building a library, let’s call it example/mylib, and this library makes use of a PSR-3 compatible logger. Now obviously your package will depend on “psr/log” in version 1.0.0, so you have the psr/log interfaces available. You don’t want to depend on a specific implementation, because you don’t want to force a user of your lib to use that one. But since you are building your lib so that it needs a log provider injected (even if it is just a NullLogger), you want your dependencies to reflect that. So, what you do is: you require “psr/log-implementation” in version 1.0.0, just the same way you would require a regular package. “psr/log-implementation” doesn’t exist as a package though, but there are several packages which provide this virtual package, so if someone who depends on your library depends on one of those as well, all of the dependencies will be met.

How does a library provide a virtual package? Simple, by using the provide keyword in its composer.json.

Let’s look at a handful of composer.json examples:

Example 1: example/mylib composer.json – our own project that requires a logger

{
  "name": "example/mylib",
  "description": "...",
  ...
  "require": {
    "psr/log": "1.0.0",
    "psr/log-implementation": "1.0.0"
  }
  ...
}

In Example 1 we define our example/mylib to require the (existing) package “psr/log” and the virtual package “psr/log-implementation” – the way we require them is exactly the same. To be usable, our lib now requires that both a “psr/log” 1.0.0 and a “psr/log-implementation” 1.0.0 package are available when we install it in any application.

Example 2: somelog/logger composer.json – a random psr/log implementation

{
  "name": "somelog/logger",
  "description": "...",
  ...
  "require": {
    "psr/log": "1.0.0"
  },
  "provide": {
    "psr/log-implementation": "1.0.0"
  }
  ...
}

In Example 2 we have a random PSR-3-compliant logger, which defines that it provides a “psr/log-implementation” 1.0.0 package (besides providing “somelog/logger”), so if we require this package anywhere, the “psr/log-implementation” 1.0.0 requirement is automatically fulfilled. A package can also list more than one virtual package in its “provide” section, allowing one package to fulfill more than one requirement.

Example 3: myapp/myapp composer.json – an application using our library

{
  "name": "myapp/myapp",
  "description": "...",
  ...
  "require": {
    "example/mylib": "1.0.0",
    "somelog/logger": "1.0.0"
  }
  ...
}

Example 3 shows an application requiring the library and fulfilling the library’s derived requirement for a psr/log-implementation by requiring “somelog/logger”.

This allows for a few very nifty tricks: users of your lib can customize their stack and still work with your library – basically you raise interoperability while still hinting at what you need. I wish more projects would make use of this.


Tagged: composer, php, virtual package
Link
Matthias Noback: Backwards compatible bundle releases (27.9.2014, 11:43 UTC)

The problem

With a new bundle release you may want to rename services or parameters, make a service private, change some constructor arguments, change the structure of the bundle configuration, etc. Some of these changes may actually be backwards incompatible changes for the users of that bundle. Luckily, the Symfony DependencyInjection component and Config component both provide you with some options to prevent such backwards compatibility (BC) breaks. If you want to know more about backwards compatibility and bundle versioning, please read my previous article on this subject.

This article gives you an overview of the things you can do to prevent BC breaks between releases of your bundles.

Renaming things

The bundle itself

You can't, without introducing a BC break.

// don't change the name
class MyOldBundle extends Bundle
{
}

The container extension alias

You can't, without introducing a BC break.

use Symfony\Component\HttpKernel\DependencyInjection\Extension;

class MyBundleExtension extends Extension
{
    ...

    public function getAlias()
    {
        // don't do this
        return 'new_name';
    }
}

Parameters

If you want to change the name of a parameter, just copy the parameter instead of actually renaming it:

parameters:
    old_parameter: same_value
    new_parameter: same_value

class MyBundleExtension extends Extension
{
    public function load(array $config, ContainerBuilder $container)
    {
        $container->setParameter('old_parameter', 'same_value');
        $container->setParameter('new_parameter', 'same_value');
    }
}

Config keys

The container extension alias can not be changed without causing a BC break, but it is possible to rename config keys. Just make sure you fix the structure of any existing user configuration values before processing the configuration:

class Configuration implements ConfigurationInterface
{
    public function getConfigTreeBuilder()
    {
        $treeBuilder = new TreeBuilder();
        $rootNode = $treeBuilder->root('my');

        $rootNode
            ->beforeNormalization()
                ->always(function(array $v) {
                    if (isset($v['old_name'])) {
                        // move existing values to the right key
                        $v['new_name'] = $v['old_name'];

                        // remove invalid key
                        unset($v['old_name']);
                    }

                    return $v;
                })
            ->end()
            ->children()
                ->scalarNode('new_name')->end()
            ->end()
        ;

        return $treeBuilder;
    }
}

Service definitions

If you want to rename a service definition, add an alias with the name of the old service:

services:
    new_service_name:
        ...

    old_service_name:
        alias: new_service_name

This may cause a problem in user code, because during the container build phase it may call $container->getDefinition('old_service_name'), which will cause an error if old_service_name is not an actual service definition but an alias. This is why user code should always use $container->findDefinition($id), which resolves aliases to their actual service definitions.

Method names for setter injection

services:
    subject_service:
        calls:
            # we want to change the name of the method: addListener
            - [addListener, [@listener1]]
            - [addListener, [@listener2]]

This one is actually in the grey area between the bundle and the library. You can only change the method names used for setter injection if users aren’t supposed to call those methods themselves – and they should never need to. You should just offer an extension point that takes away the need for users to call those methods. The best option is to create a compiler pass for that:

namespace MyBundle\DependencyInjection\Compiler;

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;
use Symfony\Component\DependencyInjection\Reference;

class InjectListenersPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        $subjectDefinition = $container->findDefinition('subject');

        foreach ($container->findTaggedServiceIds('listener') as $serviceId => $tags) {
            $subjectDefinition->addMethodCall(
                'addListener',
                array(new Reference($serviceId))
            );
        }
    }
}
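On the user side, services would then simply be tagged so the compiler pass above picks them up (a sketch – the service name and class are illustrative; the tag name matches the one the pass looks for):

services:
    my_listener:
        class: MyApp\MyListener
        tags:
            - { name: listener }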

Truncated by Planet PHP, read more at the original (another 4278 bytes)

Link
SitePoint PHP: How to Install Custom PHP Extensions on Heroku (26.9.2014, 16:00 UTC)

In this tutorial, we’ll learn how to install custom extensions on Heroku. Specifically, we’ll be installing Phalcon.

Sign-up and Set up

In order to use Heroku, you must sign up for a Heroku account. Heroku generally works through a command-line interface. To use that interface, you need to install the Heroku toolbelt package for your operating system. If you are using Linux, open up a terminal and type the following command.

wget -qO- https://toolbelt.heroku.com/install.sh | sh

After installing the toolbelt, you’ll have access to the heroku command from your command shell. Authenticate using the email address and password you used when creating your Heroku account:

heroku login


Enter your Heroku credentials.
Email: fcopensuse@gmail.com
Password:
Could not find an existing public key.
Would you like to generate one? [Yn]
Generating new SSH public key.
Uploading ssh public key /home/duythien/.ssh/id_rsa.pub

Press Enter at the prompt to upload your existing SSH key or create a new one; the key is used for pushing code later on.

Phalcon is a third-party extension, and thus not bundled with PHP. Phalcon requires the following components:

Continue reading How to Install Custom PHP Extensions on Heroku

Link
Michael Kimsal: Dealing with an incorrect security report (26.9.2014, 06:53 UTC)

A colleague of mine has had his company’s code “security audited” by one of their clients; specifically, the client hired a firm to do security testing on many (all?) of the services the client uses, and my colleague’s company was one of them.

They’re told they’re in danger of losing the account unless all of the “security holes” are patched. The problem is, some of the things that are being reported don’t seem to be security holes, but their automated scanner is saying that they are, and people can’t understand the difference.

Here’s an example – you tell me if this is crazy or not.

For URL:
hostedapp.com/application/invite/?app_id="><script>alert(1407385165.6523)</script>&id=104

Output contains:
<input type="hidden" name="app_id" id="app_id" value=""><script>alert(1407385165.6523)</script>">

Report coming back is:
Cross-Site Scripting vulnerability found
Injected item: GET: app_id
Injection value: "><sCrIpT>alert(14073726.2017)</ScRiPt><input "
Detection value: 14073726.2017
This is a reflected XSS vulnerability, detected in an alert that was an immediate response
to the injection.

When you pass in a value, it is escaped; there is no alert box that pops up (in any browser at all). I’m *thinking* that the reporting tool is simply seeing that the number exists on the resulting page (“detection value”) and is flagging this as “bad”. That seems way too naive for a security tool, though.

Is there some other explanation for why a security tool would look at this and still report that this was ‘insecure’?


I'm currently working on a book for web freelancers, covering everything you need to know to get started or just get better. Want to stay updated? Sign up for my mailing list to get updates when the book is ready to be released!

Web Developer Freelancing Handbook

Link
Matthias Noback: Exposing resources: from Symfony bundles to packages (25.9.2014, 22:00 UTC)

Symfony bundles: providing services and exposing resources

When you look at the source code of the Symfony framework, it becomes clear that bundles play two distinct and very different roles: in the first place a bundle is a service container extension: it offers ways to add, modify or remove service definitions and parameters, optionally by means of bundle configuration. This role is represented by the following methods of BundleInterface:

namespace Symfony\Component\HttpKernel\Bundle;

interface BundleInterface extends ContainerAwareInterface
{
    /** Boots the Bundle. */
    public function boot();

    /** Shutdowns the Bundle. */
    public function shutdown();

    /** Builds the bundle. */
    public function build(ContainerBuilder $container);

    /** Returns the container extension that should be implicitly loaded. */
    public function getContainerExtension();

    ...
}

The second role of a bundle is that of a resource provider. When a bundle is registered in the application kernel, it automatically starts to expose all kinds of resources to the application. Think of routing files, controllers, entities, templates, translation files, etc.

The "resource-providing" role of bundles is represented by the following methods of BundleInterface:

interface BundleInterface extends ContainerAwareInterface
{
    ...

    /** Returns the bundle name that this bundle overrides. */
    public function getParent();

    /** Returns the bundle name (the class short name). */
    public function getName();

    /** Gets the Bundle namespace. */
    public function getNamespace();

    /** Gets the Bundle directory path. */
    public function getPath();
}

As far as I know, only getName() serves both purposes, since it is also used to calculate the configuration key that is used for the bundle's configuration (e.g. the configuration for the DoctrineBundle is to be found under the doctrine key in config.yml).

There are several framework classes that use the bundle name and its root directory (which is returned by the bundle's getPath() method) to locate resources in a bundle. For instance the ControllerResolver from the FrameworkBundle allows you to use the Bundle:Controller:action notation to point to methods of controller classes in the Controller directory of your bundle. And the TemplateNameParser from the FrameworkBundle resolves shorthand notation of templates (e.g. Bundle:Controller:action.html.twig) to their actual locations.

It's actually quite a clever idea to use the location of the bundle class as the root directory for the resources which a bundle exposes. By doing so, it doesn't matter anymore whether a bundle is part of a package that is installed in the vendor directory using Composer, or if it's part of your project's source code in the src directory; the actual location of resources is always derived based on the location of the bundle class itself.

Towards a better situation for resources

Let me first say that I think the BundleInterface ought to be separated into two different “role interfaces”, based on the different roles bundles play. I was thinking of a ProvidesServices and an ExposesResources interface. That would clearly communicate the two different things that a bundle can do for you.

Puli: uniform resource location

Much more important than splitting the BundleInterface is to have a better way of exposing resources located inside a library (or bundle – it wouldn’t actually make a difference). This is something Bernhard Schussek has been working on. He has created a nice library called Puli. It basically provides a way to locate and discover resources from all parts of the application, be it your own project or a package that you pulled in using Composer.

The core class of Puli is the ResourceRepository. It works like a registry of resource locations. For every resource or collection of resources that you want to expose, you call the add() method and provide as a first argument a prefix, and as a second argument an absolute path to a directory:

use Webmozart\Puli\Repository\ResourceRepository;

$repo = new ResourceRepository();
$repo->add('/matthias/package-name/templates', '/path/to/Resources/views');

Now if you ask the repository to get the absolute path of a particular resource, you can do it by using the prefix you just

Truncated by Planet PHP, read more at the original (another 4623 bytes)

Link