AU3431 New Media Journal

This is my Sound for New Media journal where all my posts and project updates will be.

“ENVI” Rm. 306 Showcase 6/13/12

Our sound installation was a huge success in terms of getting people to come and create their own environment with the setup and equipment we provided. Although we weren’t given enough time to sync up video and make it visually immersive, we still used an array of lighting techniques and programming to give the classroom a feel for what the different environments could be like. I would have liked to edit a unique and distinguishable sound set for each environment, as well as capture actual desert ambience with high sandy winds, the faint sound of sizzling, and other minute differences, to draw in the audience even more. Perhaps the pedalboard could have been dialed in better to affect the samples in a way that was noticeable to most people. I asked a ton of guests how it was for them, and most said it was an enjoyable experience. They understood the concept from the get-go and said it was set up well enough that people could follow the simple logic of hitting buttons to receive both audio and light triggers, directly and indirectly. If time permits, I would suggest some of the class build on our concept to truly immerse participants in creating worlds of their own through constant audience participation. I thoroughly enjoyed watching everyone’s fascinated faces and thought we did a superb job troubleshooting and making the best out of ENVI.

Project 2 “ENVI” The Environmental Conductor Update 6/6/12

Today (Week 10) will be our dress rehearsal for ENVI. The first station of the environmental conductor is where audience members will select which environment they want (beach, city, desert, forest, underwater). They will use 3 iPads (Ambience, SFX 1, SFX 2) to control and affect the sound and run it through 2 separate FX stations (Kaoss Pad and pedalboard) to further build the environment being created. The stations will be activated by a motion-sensor light when audience members step near them. Each station affects the others in some way, whether it’s the FX stations, the Microphone station, or the Lighting station(s). We are creating a Logic session with 6 outputs and 3 EXS24 MIDI tracks for the audio segment of this project. The lighting crew is using Junxion and MIDI Pipe to program the lights (Beach=Yellow / Underwater=Blue / Forest=Green / Desert=Red / City=ALL COLORS). As sound director/manager, I have gathered the necessary SFX and ambience samples/data to fully test out ENVI in the classroom in conjunction with the lighting. I am responsible for knowing how the different controllers will affect the audio and what speaker technology we will need to execute most efficiently without confusing audience participants.
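For reference, here is a minimal sketch of the environment-to-light mapping in code, assuming a hypothetical MIDI light controller that listens for note-on messages. Our actual rig uses Junxion and MIDI Pipe rather than a script, so the note numbers and port here are placeholders, not our real configuration.

# A minimal sketch of the environment-to-light-color mapping (hypothetical
# note numbers; our real setup uses Junxion and MIDI Pipe, not this script).
import mido

LIGHT_NOTES = {"yellow": 60, "blue": 61, "green": 62, "red": 63}

ENV_COLORS = {
    "beach": ["yellow"],
    "underwater": ["blue"],
    "forest": ["green"],
    "desert": ["red"],
    "city": ["yellow", "blue", "green", "red"],  # City = ALL COLORS
}

def trigger_environment(env, port):
    """Send a note-on for each light color assigned to the chosen environment."""
    for color in ENV_COLORS[env]:
        port.send(mido.Message("note_on", note=LIGHT_NOTES[color], velocity=100))

with mido.open_output() as port:      # default MIDI output port
    trigger_environment("city", port)  # e.g., an audience member picks "city"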

As of now this is our sound system setup:

4 AV40 speakers as surrounds A/B/C/D

Speakers A and B are 1 pair of AV40s where Speaker A is the powered speaker and Speaker B is the satellite (connected with extensive speaker cable). Likewise, Speakers C and D are a pair with Speaker C being powered and D the satellite.

2 Yamaha MS400 Loudspeakers 15″ w/ horn (on stands) for FX Send E and FX Send F
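To keep the routing straight during setup, here is a small reference sketch mapping the Logic session’s six outputs onto the speakers above. The specific output-to-speaker assignments are my assumption for planning purposes and may change in the final session.

# Hypothetical mapping of the Logic session's 6 outputs to the speakers
# listed above (assignments are a planning assumption, not final).
LOGIC_OUTPUTS = {
    1: "Speaker A (AV40, powered)",    # feeds satellite B over speaker cable
    2: "Speaker B (AV40, satellite)",
    3: "Speaker C (AV40, powered)",    # feeds satellite D over speaker cable
    4: "Speaker D (AV40, satellite)",
    5: "FX Send E (Yamaha MS400)",
    6: "FX Send F (Yamaha MS400)",
}

for output, destination in sorted(LOGIC_OUTPUTS.items()):
    print(f"Logic output {output} -> {destination}")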

CAL IT II Article Review 6/6/12

“To study the role of competing noise and architectural design in patient care, Otto, Edelstein and their colleagues are conducting experiments in Calit2’s StarCAVE virtual reality system. The StarCAVE is a five-sided, immersive 3D environment where scientific models and animations are projected on 360-degree screens surrounding the viewer, and onto the floor below.” The technology being used by Calit2’s Sonic Arts Research and Development Group is monumental compared to how communications are still typically conducted (email, phone, meetings, conferences). Virtual reality spaces and videoconferencing with their spatialized speaker-array sound techniques are critical to creating environments correctly, homing in on the “sweet spot” for everyone while keeping unwanted noise and ambience to a minimum. They spoke of putting this kind of technology in cars so that GPS audio is confined to the driver’s space while music plays through the other speakers at a level that doesn’t distract the driver. I believe the best part of this is their initial focus on creating safer, more efficient hospital buildings that serve both staff and patients. It’s sobering to know that 40,000 instances of error occur in hospitals, errors that could ultimately be life-threatening when the environment yields decibel measurements over 85 dB. I would like to see how this can be incorporated into movie theaters and concert setups. Line-array technology and surround sound are great, but they limit the “sweet spot” to a select few. If the sound could theoretically be identical for each individual, concerts and movies would be that much more worth going to or paying for. Architectural design for any type of building is possible with this system (studios, hospitals, theaters, offices, etc.). I’ll be looking forward to the day when I am using a similar, more advanced virtual reality system that allows me to connect with my clients faster and more efficiently than ever to produce and share data seamlessly.

“NO TECH” Experiment Review 6/6/12

Week 8 was Lo-tech week, which meant that all technology (phones, laptops, computers, recorders, etc.) was off-limits during class time. We instead used traditional media and printouts of our learning material, and took notes with pencil and paper. In terms of success, I felt the class handled it well and took the time to figure out who needed to do what for Project 2, ENVI. Communication between students and instructor increased significantly considering we had “nothing” to distract us. However, when putting together a project of this proportion that has everything to do with audio/video/lighting technology, we were unable to efficiently test for bugs, which could have put us ahead of the game. It’s Week 10 and we must have it up and running mistake-free to showcase in Week 11 as the final. The group quiz we took was also a great way for us to discuss our opinions and come together around the same perspective. I wouldn’t suggest an experiment like this if the class is behind on getting the project together. I haven’t used printouts for studying in such a long time that I really had no use for the PowerPoint that was handed out. Sure, I followed along as we were discussing it, but I haven’t looked at it since. Going back to traditional media is rather interesting when you’re constantly consumed by technology you use mindlessly throughout the day, especially in the Audio program.

New Media Electronic Concert Program Idea 5/30/12

An innovative way to decrease spending on concert programs, which list who is performing, what time the event starts and ends, and other important information, is feasible via new technology that wouldn’t necessitate printing. One idea is installing an iPad or another tablet on the back of each seat so audience members know what is going on. These tablets would only be accessible during intermission and would be controlled by the lighting system so they stay off during performances. You could also have a projector and a white screen listing the concert information, but this wouldn’t reach all viewing perspectives, especially in a large room. An app could do this, and most people could adopt it given the popularity of the iPhone and Android, but phones would be intrusive in the sense that users could surf the web and get distracted by other phone functions. Phones should always be off during concerts unless they’re taking pictures or short clips of video for memories. In a classical music setting, phones are a MAJOR distraction, which is why the seat-back iPad that each individual can see is a much more feasible idea. Despite the fact that the initial payments would be costly, it still saves money on printing for every single concert (estimated at $40,000 a year at UCSD). The point is to keep everyone happy and focused on the performance rather than the internet or what he or she was doing last night.
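To make the cost argument concrete, here is a rough break-even sketch. The $40,000-a-year printing estimate is the figure above; the seat count and per-tablet price are purely hypothetical placeholders.

# Rough break-even estimate for seat-back tablets vs. printed programs.
# The printing figure is the UCSD estimate above; the seat count and
# per-tablet cost are hypothetical placeholders.
ANNUAL_PRINTING_COST = 40_000  # estimated yearly printing spend (UCSD)
NUM_SEATS = 500                # hypothetical hall size
TABLET_COST = 300              # hypothetical cost per installed tablet

initial_outlay = NUM_SEATS * TABLET_COST
break_even_years = initial_outlay / ANNUAL_PRINTING_COST
print(f"Initial outlay: ${initial_outlay:,}")       # $150,000
print(f"Break-even: {break_even_years:.2f} years")  # 3.75 years

Under those made-up numbers, the tablets would pay for themselves in under four years, and every year after that is pure savings.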

Project 2 Update 5/23/12 (using same diagram below)

The first station is where audience members will select which environment they want (beach, city, etc.). They will use hand/foot controllers (iPad, Nano Pad, pedalboards) to control and affect the sound/lighting and run it through more FX and different samples via the other stations. The stations will be activated when audience members step on or get near them, with a light that will either strobe or simply light up when activated. Each station affects the others in some way, whether it’s the FX stations, the Microphone station, or the Lighting station(s). As sound director/manager, I am responsible for gathering samples and sounds that will be most appropriate for what we’re trying to achieve, which is environmental creation. I am also responsible for knowing how the different controllers will affect the audio and what speaker technology we will need to execute most efficiently without confusing audience participants.

As of now this is our sound system setup:

4 AV40 speakers as surrounds in between stations at average ear height
2 stereo (speakers TBA) for the drone 1 station
2 Yamaha MS400 Loudspeakers 15″ w/ horn (stands opt.) best for bass frequencies
1 40-50W guitar amp for the pedalboard FX stations 3 and 5

In addition, I am in charge of collecting recorded samples from the following:
Tony Regalmuto: Downtown/Airplane/City FX/ambience
Kevin Hilgeman: Scuba/Underwater/Mountain FX/ambience
James Mortensen: Beach/Waves/Boardwalk FX/ambience
Matt Rose: Creek/Frogs/Crickets FX/ambience

Keep in mind: record at different times of day (morning/afternoon/night), get as many takes as possible, let sounds ring out when necessary, capture at least a minute of ambience, and record from different proximities.

Let me know if anyone else wants to record some type of environment. Leave a comment here, post in the eCompanion discussion and your journals so I can collect that data.

I was thinking we’d use a pedalboard or two for FX stations that could be controlled by hand or foot. We need to establish how these stations will be semi-independent from one another and how they directly change another station’s sound, as in the sketch below. The speakers/amps we choose will need to sufficiently support the sound of the different controllers (iPad, Nano Pad, pedalboard).
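As a thought experiment, here is a minimal sketch of that semi-independence, assuming each station exposes an FX-depth parameter and a list of downstream stations it nudges when triggered. The station names and amounts are placeholders, not our final wiring.

# A minimal sketch of semi-independent stations: triggering one station
# nudges the FX depth of its downstream stations (all values hypothetical).
class Station:
    def __init__(self, name, downstream=None):
        self.name = name
        self.fx_depth = 0.0  # 0.0 = dry, 1.0 = fully wet
        self.downstream = downstream or []

    def trigger(self):
        """Activate this station (e.g., via motion sensor) and nudge
        every station downstream of it."""
        print(f"{self.name} activated")
        for other in self.downstream:
            other.fx_depth = min(1.0, other.fx_depth + 0.25)
            print(f"  -> {other.name} fx_depth is now {other.fx_depth:.2f}")

kaoss = Station("Kaoss Pad FX")
pedals = Station("Pedalboard FX")
mic = Station("Microphone", downstream=[kaoss, pedals])

mic.trigger()  # stepping near the mic station affects both FX stations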

Technological Effects on Art Form of Music 5/23/12

The evolution of and advances in music across the world are immense and cannot be measured, given the magnitude of sounds we articulate, create, experience, and communicate via acoustic/electronic instruments and processors and natural/synthetic environments. In terms of amplified Taiko drums, I don’t see how it stops being Taiko just because it is miked and run through a sound reinforcement system to distribute the sound of the drums to a small or large audience. However, when effects and digital processing are added into the equation, Taiko transforms and transcends into the modern form of how some artists interpret the music via new technology. Electric guitars, saxophones, cellos, and violins all typically retain their raw natural sound until the amplified signal is changed by circuitry, FX, gain, or other parameters (filter cutoff, EQ, attack, pitch).

If you want to play jazz with an electronic saxophone, I don’t believe it changes the genre; it is rather interpreted in a different texture that is similar to the acoustically resonant woodwind instrument. In the classical world, it might be unacceptable to use an electronic keyboard to emulate a harpsichord because the tradition requires that all the instruments of a symphony or orchestra stay as true to the original composition as possible. However, if the classical performance is being orchestrated via electronic instruments in a contemporary piece, then I don’t see how it would be unacceptable. It really all depends on how the performer alters their sound as well as the environment in which the sound is being produced. If it’s outside in a deep canyon without any microphones or speakers, the sound will resonate and echo naturally. To emulate that electronically in a smaller concert hall, one might recreate it with reverbs, echo delays, and other processing tools. The guitar art form transforms completely when you run effects through it. Anything you can amplify can essentially be altered endlessly through computer music technology.

Katsura Yabuki Performance 5/23/12

The performance sounds like an old Japanese warrior/Samurai ritual that has been transformed by new audio/visual technology. Upon first listen, it starts out with a heavily processed (reverb, delay) percussive blip drum and what sounds like a bicycle wheel chain being cranked backwards. There are many textures and timbres present as underlying drone ambience, ranging dynamically from sub lows to rich sibilant highs, probably triggered by MIDI controllers of some sort. I also hear a harsh bell, processed hand drumming, and drumming with sticks on different surfaces. The pitches of the sounds are constantly being transformed. I feel it is organized yet improvisational chaos in the beginning, and it picks up rhythms and musical interpretations starting at 12:30 in the audio example.

I believe this piece was part composition in that there was a direction: begin with chaos, organize into rhythmic and cohesive patterns in the middle, and tone it down with softer spatialization techniques in the end. I am almost certain parts of the composition are improvised, although they might have some structure dictating when to become chaotic or coherent. There is a ton of experimentation going on in terms of fusing the sounds of a bicycle, processed drums, drone synths/samples, and textured ambient backgrounds. The piece is aimed at an audience with an interest in sound design, experimentation, new media technologies, and sound art. I’m sure anyone who came across this in the right setting would be fully immersed and captured by the performance.

Snippet of the 27 minute piece: http://www.youtube.com/watch?v=l8BvhsoDaek

Emerging Technologies 5/16/12

Ex. 1: Music Streaming Charts by Spotify

http://www.guardian.co.uk/media/2012/may/10/spotify-streaming-services-chart

Ex. 2: Digital Goggles by Google

http://www.guardian.co.uk/technology/2012/apr/05/google-project-glass-digital-goggles

Project 2 Proposal Update 5/16/12

Although the final idea has not been set in stone, the class has decided to work as one unit to construct and demonstrate a soundscape environment consisting of most of our projects. As can be seen from the diagram above, drawn by Frank Torres, there will be a central lighting system that changes according to the stations surrounding it. The first station will be the rain machine that Kevin, Kyle, and I made. We will add contact mics to pick up sound and run it through FX and different samples via the other four stations. The stations will be activated when audience members step on them, and a light will either strobe or simply light up when activated. Each station affects the others, and there will be a quadraphonic field of speakers as well as spatialization speakers for each station. I was dubbed Sound Director for this project, which requires that I:

• Oversee all audio concept designs for the studio.
• Help build the audio schedule and milestones within the project.
• Supervise the quality of audio across multiple projects.
• Design and produce sound effects.
• Compose and produce music.
• Oversee outsource audio services for sound, Foley, music, VO, mastering, etc.
• Oversee implementation of audio assets and assist in developing integration tools.
• Oversee the designing and equipping of the studio audio facilities.
• Act as an expert in proprietary pipelines and integration processes.

I will be working closely with the artistic director to get the bigger picture accomplished by Week 9 or 10.

Sound Example: http://www.youtube.com/watch?v=7CRNpJtb44c

Some Notes to Keep in Mind 5/9/12

– New media is the convergence of computing, traditional media, and networking

– Music is an art form whose medium is sound

– Common elements of music are: Pitch, Rhythm, Dynamics, Texture, Timbre

– New media is continuous (False)

– New media, like analog media, can NOT be copied endlessly. (False)

– New media is interactive (True)

– Interactive can be described as: an individual that does something/audience influences or affects the experience/participant is affected/listener/audience influences the playback (Blu-ray)

– Social and cultural issues are considered when discussing the term “New Media” (True)

– Products that are considered “emerging technology”: Siri, Facetime on phones

– Sound Installation: large scale sound art that is placed in an immersive “space” for some amount of time so that people can come see/interact with it

Final Project? 5/9/12 (edits to come)

Considering everyone’s busy schedules, we haven’t really come together as a whole class to generate ideas about what to do for our final. A sound flashmob was mentioned and could work if we had motion-sensitive technology, but the question is where we would set up to give it the flashmob feel rather than making people come into the classroom. Tony mentioned something about combining and expanding our existing projects, yet there is no telling what everyone wants to do. If we do combine into one big group, I would prefer to be in the audio or construction sector of the project. An example could be combining the quadraphonic field with two contact mics on the rainstorm machine and using the sampler to make clear sunny-day sounds, syncing it to a moving ocean visual that could be stormy or sunny. Depending on which button the user hits from the Aeqeus Major Minor project, the sounds can be manipulated by other users to trigger the rainstorm machine or the sunny sampler. We’ll see what works best for all of us; hopefully we come to a creative conclusion.

Sound Installation Gallery Review 5/9/12

The May 2nd showing of the Sound for New Media gallery of sound installations was quite the experience, with unique and varied media types ranging from our acoustic rainstorm machine to a tweaked-out quadraphonic field. The rainstorm machine sounded convincing enough to students and looked striking in its jet black frame and gold machinery. However, our plastic thunder sheet was having issues with snapping in two from being shaken too vigorously. I would have liked to throw some contact microphones onto the wooden structure and run it through some reverbs and delays. The group talked about being able to trigger the amount of “rain, wind, and thunder” and route it to sync up with the visual of an ocean: rain falling harder or softer, the wind creating larger waves or just breezing by, and the sky darkening depending on how much thunder is rumbling. This structure should certainly get some additions; I am not 100% sure which yet, but the possibilities involve more interaction with users and some type of computing mechanism to sync sounds with visuals on a projector. Aside from our project, the other groups had great installations that fit well together. The other gallery pieces were a virtual walkthrough to begin, a quadraphonic field of vocally driven delays and effects, a picture-to-MIDI music and light booth, and a sampler with a percussive drum routed to a multi-effects processor. I believe everyone made it appealing for our first try at making sound art. Placement was reasonable, and the ability to hear and interact with the pieces was received well by users and participants. Maybe for our final we could have a cool lighting setup to alter mood and setting and get rid of the fluorescent lighting above.

Make it Rain Description 5/2/12

“Make it Rain” is an acoustic rainstorm machine complete with a rotating rain barrel, crankable wind machine, and suspended thunder sheet to create the perfect storm without getting wet. Users are in full control of the story behind what kind of storm they want, whether it’s light rain or a treacherous downpour. This will be interactive for our all-ages audience, who can use the combination of weather machines to create a storm soundscape or story. Recording the experience, both visually and aurally, is also an option for anyone with a camera or handheld audio device.

The concept was developed in a group meeting during the first class of Sound for New Media. At first we decided to make rain sticks out of PVC pipe, put in some small materials as the rain, and decorate them, but that lacked a certain creativity we all knew we had in us. We then came up with “Make it Rain” and drew up some diagrams of what it could possibly look like. Our inspiration came from the sound effects wizard Jimmy Macdonald, who did Foley and effects for Disney for countless years. He built something similar to create wind for some of the first Mickey Mouse cartoons.

Construction began with the help of master craftsman Johnny Camacho. The frame was built first, and from there the other machines began to take form. After the frame was ready to hold the barrels in place, the wind machine was created from three wooden discs to which we nailed some 20 slats for air to pass through. The crank was then added as an easy way to rotate the machine and create slow to fast gusts of wind noise. A piece of thin mesh material was added over the wind machine to create the wind sound itself. The next piece constructed was the octagonal rain barrel, in which we put beans, airsoft pellets, and other dry, hard, round materials to create the rain. Chicken wire was placed on the internal walls for the “rain” to pass over. Last but not least, with the help of Trevor, we placed the thunder sheet, which is suspended by shoelaces; you hold it firmly from the top with one hand and shake it with the other. A couple coats of black paint were then applied to the frame, and gold metallic paint was applied to the barrel and wind machine. With the cooperation of all members, including Nick’s dad, who helped with construction, we made it rain and will continue to do so for everyone who wants to participate and be a part of our rainstorm.

Think of our project as a simple base model. The ultimate goal would be much more than just a machine. Our idea was to have all of the pieces be controllers connected to one big program. On a giant screen would be a video of an ocean. As you turned the rain barrel, rain would begin to pick up on the screen: the faster the turn, the more rain would appear. You could even do gusts of rain depending on how suddenly you turn the barrel. It would work similarly for both the thunder sheet and the wind machine wheel. That would allow an individual to literally create his or her own storm in both audio and visual form. The project would go from a literal storm machine to a MIDI controller of some sort. We would all LOVE to play with it then.
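For the curious, here is a minimal sketch of that storm-machine-as-MIDI-controller idea: scale a sensor’s crank-rotation speed to a MIDI control-change value that a visual patch could read as rain intensity. The sensor, CC number, and speed range are all hypothetical.

# Hypothetical mapping from rain-barrel crank speed to a MIDI CC that a
# visual patch could read as rain intensity (all numbers are placeholders).
import mido

RAIN_CC = 1      # hypothetical CC number for rain intensity
MAX_SPEED = 5.0  # hypothetical maximum crank speed, revolutions/sec

def rain_cc_value(crank_speed):
    """Scale crank speed (rev/s) to a 0-127 MIDI CC value."""
    clamped = max(0.0, min(crank_speed, MAX_SPEED))
    return int(round(clamped / MAX_SPEED * 127))

with mido.open_output() as port:  # default MIDI output port
    # A moderate crank (~half of max speed) maps to roughly half intensity.
    port.send(mido.Message("control_change", control=RAIN_CC,
                           value=rain_cc_value(2.5)))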

 

“Ticket” Communication Experiment 5/2/12

I felt that having tickets as turns to speak had its ups and downs, especially when only one person could speak and others couldn’t, even if they had great ideas to add to the discussion. It hinders others from getting their points across, and sometimes responses aren’t as strong as participants wanted them to be. I am personally not a fan of the ticket system, since I bounce off the ideas others have in mind and how they communicate them. Raising hands or letting others finish their thoughts is how I communicate best. This experiment was nonetheless interesting and fun for the time being, but I would not suggest it for future discussions unless it was mandatory.

Project Proposal Update 4/25/2012

Kevin, Kyle, and I have decided to upgrade the rain stick idea and give it a storytelling aspect by adding two other weather machines to make a fully functional rainstorm machine. We will add a wind wheel and a clear plastic thunder sheet that you can shake or strike with a mallet to vary the sound. This will give the audience an individualized experience creating their own rainstorm story. It will require building the sculpture from wood, some canvas, and a clear plastic sheet. The whole thing will then be painted with a color scheme appropriate to each of the different sounds being produced. For example, the thunder sheet will be dark and gloomy, and the rain machine will have drops on it. For the internal materials of the rain machine, we are thinking about using pinto beans, sand, rice, screws, airsoft pellets, marbles, or other small mobile objects that sound like rain. This should not take too long to build and test, making it a feasible idea for the midterm that everyone can check out, record, and try for an individualized experience. It will be interactive for the audience, who can play the combination of weather machines to create a storm soundscape or story.

Equipment List

1. Clear plastic Thunder Sheet

2. Scrap wood for the structure

3. Handle and canvas for the wind machine

Here’s a video from the special features of Wall-E describing the principles we are using to create the sound sculpture. However, we are making ours a mobile, all-in-one machine for easy travel.

6:54-8:05

Project 1 Diagram Update 4/25/12

New Sound Art Forms List

1. http://www.newsoundarts.com/

2. http://www.ubu.com/sound/russolo_l.html

3. http://arstechnica.com/science/news/2012/04/archaeoacoustics-reconstructs-the-sound-of-stonehenge.ars

(http://www.youtube.com/watch?v=uUUfeQ3nVu8&feature=player_embedded)

4. http://musicweb.ucsd.edu/concerts/
