Monday, February 15, 2016

Last day of ETC2016

(No Hebrew version; it may come later, I hope. Actually, forget it - it's too long, and all Hebrew readers who might care can deal with this text in English.)

My second day at ETC2016 was just as amazing as the first day, or to be honest - it was even better. After going to sleep a bit late and getting up a bit late, I got to the conference just in time for the opening keynote, where I was quite happy to learn that I still had 10 more minutes, as everyone simply assumed the day would start at 9:00 and not at 8:50 (it does make sense, doesn't it? Even if the schedule says otherwise). I used those ten extra minutes to grab a cup of tea and relax a bit.
The first keynote, by Chris Matts, was dubbed "We don't need testers! what we really need is testers!", and it was really fun to hear, since Chris is a very skilled speaker (I will call anyone who can get a morning-dazed group of several dozen people to participate, move and laugh a good speaker). As the talk went on, we heard about several subjects that seemed interesting at the time, such as the Cynefin framework, and a seemingly nice idea titled "information arrival" that was used to examine the flow of a feature - from the initial requirement to deployment - and to notice that testing had a "looping" flow with development (apparently, we don't like loops). There was also an interesting piece of advice for those of us using Given-When-Then: start by writing the "Then" part, and move backwards. This way you will miss fewer use cases, as you won't be blinded by the fact that you have already created a scenario resulting in the desired feature. For instance, let's look at the following use case:
  • Given that a Gmail user is logged in, When the user clicks log-out, Then the user will be logged out.
Nice, isn't it? We have a test for the logout function.
But what about when a logged-in user deletes their cookies? How many of you think you would have thought about it when writing GWT scenarios for the logout mechanism? Some, maybe. But what if you had asked "what could cause a Gmail account to be logged out?" I certainly believe that many more people would have thought about this case.
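That "start from the Then" advice can be sketched in code: begin with the desired outcome (the user ends up logged out) and enumerate every trigger that should lead to it. The `Session` class and the list of triggers below are entirely hypothetical - just an illustration of the shape of the exercise, not anything from the talk.

```python
# Working backwards from the "Then": fix the outcome first, then ask
# "what are all the Whens that should produce it?"
# Session and its triggers are made-up, for illustration only.

class Session:
    def __init__(self):
        self.logged_in = True
        self.cookies = {"auth": "token123"}

    def click_logout(self):          # the scenario everyone writes
        self.logged_in = False

    def delete_cookies(self):        # the scenario people tend to miss
        self.cookies.clear()
        self.logged_in = False       # no auth cookie -> treated as logged out

    def expire(self):                # another easily-missed trigger
        self.logged_in = False

# "Then the user is logged out" -- enumerate the candidate Whens:
triggers = {
    "user clicks log-out": Session.click_logout,
    "user deletes cookies": Session.delete_cookies,
    "session times out": Session.expire,
}

for name, action in triggers.items():
    s = Session()
    action(s)
    assert not s.logged_in, name
    print(f"PASS: {name} -> user logged out")
```

The point is the direction of thinking: starting from the outcome forced three "When" clauses out of one "Then", where the forward direction would likely have stopped at the first.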

Finally came the point I found myself objecting to the most. We had a short reminder about the 3 amigos approach to feature/requirement design, and noted that today those 3 amigos are a bit more like 6 amigos, as we have the product manager, the tester and the developer, but sometimes also the analyst, the architect, the UX researcher and UI designer, maybe someone from sales. Having that many people in the discussion means it needs to be moderated, and at this point we just went over each role and its stereotypical character flaws in order to say "they can't moderate", leaving us with which option? Naturally, our savior the tester!! Now, don't get me wrong - it's always nice to hear outright flattery. If someone wants to call testers supermen, I won't be the one standing in their way. But the hoops we jumped through to get to this conclusion seemed to contradict my experiences, and to ignore the fact that we could have disqualified the tester from moderating using the exact same methods used to ridicule the PM and the developer. Besides that, I am still wondering what Chris was trying to convey in this talk. I think I have missed the message.

Anyway, next was a talk by Maaret Pyhäjärvi that was intended to expose the audience to two things - how an expert tester approaches new software, and the idea of strong-style pairing. I can't really comment on the first part, as I volunteered to be the driver for this demonstration, and this took most of my attention and focus.
However, I am the only one who can say what the experience of being the driver in that session was like (unless someone can read my mind, in which case I don't want to know). That being the case, I want to share some of that experience.
The first thing I can say is - it is harder than it looks. And it was very intense: trying at the same time to follow the instructions I was given (using a Mac, which I am not familiar with, added some complexity), trying to always remember to hold back and wait for instructions, and at the same time trying to follow what everyone else was following - the way the speaker approaches new software. As it became clear very fast that I could not do all of that at once, I decided to drop the last part and focus on learning to drive and to be navigated. After a certain point, I decided that I couldn't just do exactly what I was told, since focusing on discrete actions made me feel very detached. So I started communicating back - I did some actions that were intended to make me feel a bit more comfortable with the environment - I zoomed in a bit, and I started interpreting orders in a less strict way. For instance, when Maaret asked to see what happens when we click the "play" button (or was it "preview"? that green button that was below) and something started flickering like crazy before my eyes, I decided to understand it as us examining the deliberate changes we had made, rather than inspecting the complete end result, so I slowed the preview down to allow everyone to see the result better.
This, by the way, was another thing that added difficulty for me in this situation: in addition to following orders and trying to half-follow a logic I was not familiar with, I also had in mind the fact that I was supposed to help Maaret deliver her message, and that each action I chose to do, or not to do, would be something the audience would see, and therefore my performance should be as good as it could be - whatever "good" means in a context I'm not familiar with. I also felt that a more experienced driver could take the navigator's instructions and, by way of following them, provide some sort of feedback and maybe even tune in to the thought process of the navigator. I wonder if this is really the case, or just a feeling I had.
Oh, and in case you are wondering - no. I was not really aware of most of what I wrote here during the talk. I am not (yet) that good at self-reflection. The main voice in my head was more like "Oh my, there are all those people here, looking at me. Am I doing everything right? Can I hide behind the screen a bit more? Why am I messing up simple actions? Was I instructed to do anything else and missed something?" The long, all-aware version is what I managed to analyze from it when inspecting it in a much calmer environment, where I was able to hear the smaller voices that got drowned out by that very anxious one in real time.
But, all in all, I really enjoyed the experience. And all I need to do now is wait for a video that I can watch to fill in the parts I missed.

After a short break, I went to the event I had marked in advance - the security testing mobbing session. Why was it marked? Because it combined something I am interested in with something I was curious about. I assumed that in terms of security testing I wouldn't learn anything new, but I still enjoy going over the basics, especially when there are others around who are surprised to learn that most security testing is really very simple. As for mobbing - I was curious to see a mob in action and try to assess for myself what I could use it for and what I couldn't.
Yes, of course I ended up being part of the mob. How else could I learn? By watching from the side?
The first driver, Abby Bangser, surprised me with her first instruction: open Chrome devtools. It surprised me because it is such a smart move that for some reason is never my first one - just by having the console open you gain visibility into many things, so unless you're looking for something very specific, it's good to have it open. The second move caught me even less prepared: "in the devtools, open the network tab and preserve the log". Again, I can see the reason behind it, or at least guess that reason, but in fact I've never used that part of the devtools, and I'm not even sure what it does. It was really interesting to see someone going to that tool by default. Then we went struggling on - it seems that half of our mob knew a thing or two about web security testing (or at least knew how to use Fiddler and inspect a bit under the hood of a web page), while the other half were less comfortable around this area. Since the actual security challenge was not important (go and check the site we played with and pick a goal at random), I'll focus on the work we did as a mob. Once again, it is that much harder than it seems. Letting the navigator decide, letting everyone in the mob speak, driving according to instructions, bridging over skill gaps - a lot of social skills were required just to make this format work. Then we encountered a problem of focusing too strongly, so we didn't even see the obvious answer (a cookie with a boolean named isAdmin); we actually needed help from the audience to notice what was staring us in the face, because our discussion was so focused on a previous success we had.
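That isAdmin cookie is a good example of just how simple most security testing is. Here is a minimal sketch of the tampering idea, using Python's standard-library cookie parser: read the cookie string as devtools would show it, flip the flag, and you have the value to replay in the next request. The cookie header itself is made up for illustration.

```python
# Sketch of the kind of check the mob was doing: look for authorization
# flags stored client-side, where the user can simply edit them.
# The raw cookie string below is hypothetical.
from http.cookies import SimpleCookie

raw = "isAdmin=false; sessionId=abc123"  # as seen in devtools' network tab

jar = SimpleCookie()
jar.load(raw)

print(jar["isAdmin"].value)  # "false" -- a boolean the client controls

# The "attack" is just editing the value and replaying the request:
jar["isAdmin"] = "true"
tampered = jar.output(header="", sep=";").strip()
print(tampered)  # the cookie string to send back to the server
```

If the server trusts that flag without checking it against server-side state, anyone is an admin - which is exactly the kind of obvious answer a mob can still stare straight past.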
We had two rounds, with a short time between them to reflect on our quite poor first performance, and in the second round I was much more aware of what I was doing (especially what I was doing wrong), and I think our mob functioned a little bit better. True, we encountered a technical problem with our test environment, but we tried to tackle it as a mob, and I was quite happy with the progress we made in such a short time. After the mob was done, I had a nice talk with Abby about our shared experience, and about the difficulties and thoughts each of us had about this mob.
My main conclusion from that, besides the realization that mobs require quite a lot of practice before they perform well, is that doing the scary thing and putting myself under the light is really worth it - I think I got more out of both sessions than those that only watched them.

Next, after lunch, came a talk by Claudia Rosu about "A developer experience to testing", which was very interesting. It took me a while to understand this, but it was the first time I saw someone describing the use of test cases as part of a development process. What I mean by that is that, the way she described it, she was approaching the whole idea of writing test cases in almost the opposite way to the one I'm familiar with. Usually, when I write tests, I try to think "what can go wrong? What might go missing? What should I double-check? Where are the risky areas?" and then I set out to see that those things won't happen, and if they happened, that they get fixed. The questions she was asking had a completely different tone: "What should I be doing? What interfaces do I need to create for that? Am I understanding my client properly? What are the implications of the feature I'm working on? How can I work safely without fear of breaking things unintentionally?" Those questions are the ones that the developers I work with usually try to work out in their design or coding phase (which are often combined).
At first, when I understood this, my reaction was "So, she's basically writing test cases without testing", but fortunately I was nowhere near my computer, so I could think about it some more and see that I was plain wrong. What she was doing was exactly testing, only in a way I was not familiar with - she was learning her product and her customer, and was even oriented towards searching for problems.
I am curious, though, about the difference between these two approaches - which is better for which task. I suspect that "my" approach will be better at spotting the non-functional requirements, as the other approach uses testing as part of the functionality creation process, but this isn't necessarily the case.

After that - open space, where everyone can be a speaker! 30-minute slots, several different places (somewhere between 4 and 6), and way too many interesting talks at the same time.
I started by going to Gita Malinovska's session, where she asked the participants about their test frameworks, and specifically about mobile application automation tools and frameworks. We had an interesting discussion about the architecture of such a test environment, with a bit about the difference between record & playback tools and what we called "coded" tools (and thanks to the guy from Ranorex whose name I forgot). One thing this short discussion taught me was that there is much confusion about what constitutes a test tool, and what belongs to each level of the framework. One of the participants considered IntelliJ part of their testing framework (and in a manner of speaking, it is), and someone else wasn't sure about the differences between JUnit, Appium, BDD-like languages, and all the rest. Maybe it is a point worth clarifying.

After that - I gave a talk. I was so hyped-up the first day that I found myself wanting to contribute something to the party, and the open space gave me an opportunity to do that.
The subject I chose was threat modeling. My goal in this short talk was to convince everyone that security testing is not scary, and that everyone can, and should, do it (this notion, by the way, is taken from a book by Adam Shostack named "Threat Modeling"; more details on this book will come in a future post I've been writing for ages now). We went quickly over the problem of "what is a threat?" and I presented Microsoft's STRIDE model (which I already mentioned here, and remember: there's English below all of those undecipherable Hebrew letters) as a tool to help us think about threats. Now, the best way of showing them that everyone can do threat modeling was to start doing it, right? I thought so too. I still do. I asked for a volunteer to draw an abstraction of his product on the board. Easy, huh? Everyone can threat model any system, right? We got a really tough system. At least by the description we had, it was a system passing messages back and forth with very minimal processing, no database, and very simple-looking logic. I needed time to think, and I needed ideas. As always, when in trouble - stall. I delegated the thinking to the others and asked them "Do you have all the information you want? What else would you want to know if you were to test this application?" Some very good questions were raised by the other listeners, and what really saved me was Mieke Gevers' question: "are you using HTTP or HTTPS?" I used this question to point out that there was a spoofing threat if they were not signing their certificates with a strong enough cryptographic hash algorithm (SHA-1, which was very popular up until recently, has been declared not secure enough), and once we found that the system has some log files, I asked about the potential for tampering via log-injection attacks. I think we ended the session with three or four potential threats before our time was up and Anne-Marie Charrett entered the room with her robots.
In retrospect, there are several points I could have done better. The first was to focus on the trust boundaries we drew. Another thing I think I should have done is to start looking for threats using the STRIDE types (e.g.: is there a way to tamper with the log files? Is there a spoofing threat between the server and the client? etc.). At any rate, I thought it went pretty well, considering I was winging the talk as it went, and I was very happy to see on the conclusions post-it board a note saying "Threat modeling is testing" and another one saying "security testing was very nice" (though the latter wasn't necessarily a response to my talk; security testing is indeed very nice).
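That STRIDE-driven questioning can be sketched as a simple checklist generator: walk each component of the system drawn on the board and ask one question per threat type. The six STRIDE categories are Microsoft's; the components and question wordings below are my own made-up examples, not anything from the session.

```python
# STRIDE as a question generator: one prompt per threat category,
# applied to every component of the system under discussion.
# Component names and question phrasings are illustrative only.
STRIDE = {
    "Spoofing": "Can someone pretend to be {}?",
    "Tampering": "Can someone modify data in {}?",
    "Repudiation": "Can someone deny an action involving {}?",
    "Information disclosure": "Can {} leak data it shouldn't?",
    "Denial of service": "Can someone make {} unavailable?",
    "Elevation of privilege": "Can someone gain extra rights through {}?",
}

components = ["the server", "the client", "the log files"]

for component in components:
    for threat, question in STRIDE.items():
        print(f"[{threat}] {question.format(component)}")
```

Even for a "tough" system with minimal processing and no database, this produces a couple of dozen concrete questions to start from - which is roughly what the stalling-for-ideas move did by hand.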
Anyway, to Anne-Marie and her robots.
She brought two calculator/space-vehicle toys and asked us to find out what the effect of a specific button is. So we set to work, raised some hypotheses, and started working towards disproving some of them and collecting enough data to enable us to make predictions. I had the pleasure of teaming up with Gita Malinovska, Abby Bangser, Dan Billing and, I'm certain, at least one or two others I'm forgetting, and watching them think was very interesting, as each has their own notions and way of thinking. Besides, we got to play with toys that move and make noise. What could be better?

After all of this rush, there was the final keynote by Erik Talboom, who spoke about SoCraTes (Software Craftsmanship & Testing) and about the journey he made from being a developer who suffered when working with testers to the point where he took a testing course to understand testers better. The talk was well presented and did a great job of referencing other talks in order to connect some of the dots and tie his subject to shared ideas we all heard. One thing he said, though, sounded a bit odd to me, so, armed with my insights from the day, along with my natural chutzpah (by the way, there really should be a word for this in English; spelling Hebrew words in the Latin alphabet just feels wrong), I rose up and asked about it. Erik's suggestion was to abolish the roles, since they impose reasonless limitations on what a person should or shouldn't do. I found it odd, since my experience taught me that roles are a great tool for growth and focus - just by giving me the title "security advocate", I became aware of and more tuned in to the security issues we had, and quite frankly, part of the reason I'm able to see the flaws that my developers miss is that someone told me "you are hired as a software tester" (they used "QE engineer", but I'm ignoring that since they clearly meant to say tester). When I listened to Erik's response, I realized we were using the same word to indicate different things. For him, roles are what I would call "job descriptions" - a list of things to do that also implies a list of things that are "not your responsibility". I, on the other hand, used roles in a way he would probably have used something like "goals" - titles that are used to set some end goal (or multiple end goals), leaving the details of "how" to the individual assigned with that goal.
This talk ended, and with it also the conference, and I stayed to chat a bit and extend the conference as much as I could. At this point, Richard Bradshaw referred me to a video that seems to have some very interesting insight about the impact of job titles. I'm not sure how exactly, but suddenly I found myself standing in a room with only four others - Maaret & Llewellyn, who were closing up the event, Christina Ohanian, who was finalizing an amazing drawing, and Tania Rosca, who was kind enough to chat with me for about 30 minutes instead of going home.
And then - the conference ended.
For real.
At least, if you don't count the dinner some of the folks gathered for, which was also great. I ended up talking with Erik Hörömpöli (who has a series of mini-posts about the conference; here's one of them), and listened to Franzisca Sauerwein speak with some others (whose names are sadly slipping from my mind). I say "listened" since they were speaking several meters above my head and I was trying to figure out what was going on. In the end I found a quiet moment and asked Franzisca to translate the whole thing for me, which she kindly did (and added some references for me for the future).

With that, I end my story of the second day of the European Testing Conference 2016 - it was as packed as it probably appears in this lengthy text (thank you to all who stayed until this point), and it was even more fun. I couldn't stop smiling for several hours after everything ended, and that's always a great sign.
There will probably be a third, shorter post in which I will try to summarize my impressions of the event as a whole, but we will see about that later.
