Small Gear, Big Impact: How Lark Max 2 Simplified Corporate Interviews in Kuala Lumpur

Hollyland’s Lark Max 2 brought professional-grade audio capture and wireless monitoring to a fast-paced corporate production in Kuala Lumpur, helping a commercial video team streamline interviews, voice-over recordings, and short film shoots in controlled indoor environments.

“It changed the way we do traditional audio recording. It’s so small, light, and easy to use.”

Meet Yuki Eyok

Based in Malaysia, Yuki Eyok works in commercial video production, handling projects that range from corporate interviews and voice-over recordings to short-form narrative films. Operating across different client environments, Yuki’s work often requires compact setups that can deliver consistent, high-quality audio without adding complexity to already busy production schedules.

The Project: Corporate Interviews and Short-Form Production

The project took place at a client office in Kuala Lumpur, involving a production team of around 20 people. The scope included corporate interviews, voice-over recordings, and short film content, all captured within indoor, enclosed environments such as office rooms and even inside vehicles.

With multiple interview candidates scheduled throughout the shoot, the team needed an audio solution that could be deployed quickly, reused efficiently, and monitored reliably without slowing down transitions between subjects.

In total, 4–5 team members interacted with the audio system during production. While two microphones were available, they were used one at a time across different interviewees, making fast pairing and stable performance essential.

Why Lark Max 2

Traditional audio setups often require larger recorders, more cabling, and a dedicated sound recordist focused solely on monitoring levels. For this project, that approach would have added unnecessary complexity to a tightly scheduled corporate shoot.

Lark Max 2 was chosen for its compact form factor, 32-bit float recording, and wireless monitoring capabilities, allowing the team to capture clean, high-quality audio while staying agile in confined spaces.

“Some projects don’t even require a dedicated sound recordist anymore.”

Solving Real Production Challenges

Throughout the shoot, Lark Max 2 demonstrated how a smaller system can deliver professional results. The 32-bit float audio provided confidence during interviews and voice-over sessions, protecting recordings from unexpected level changes without constant manual adjustment.

Wireless monitoring proved equally valuable. Directors and operators could monitor audio in real time without standing next to the camera or recorder, which was especially helpful in closed environments like office rooms and car interiors.

“So good that we can monitor audio wirelessly.”

With fewer cables and faster setup, the team was able to move smoothly between interview subjects. Pairing and switching microphones was handled efficiently, allowing production to stay on schedule even with multiple candidates.

A Shift in Workflow

Beyond audio quality, the biggest impact of Lark Max 2 was on workflow. Its small size and ease of use reduced the need for specialized audio roles on certain projects, freeing up crew resources and simplifying coordination.

“Some projects, we no longer need someone fully dedicated to sound recording.”

Stable wireless connections ensured reliable monitoring throughout the shoot, while the system’s overall simplicity made it easier for team members to adapt quickly, even when switching setups between interviews.

Takeaways

For Yuki Eyok, Lark Max 2 proved to be a practical, efficient audio solution for corporate interviews and short-form productions. By combining compact design, 32-bit float audio, and reliable wireless monitoring, it helped reduce setup time, simplify crew workflows, and maintain professional audio quality across a variety of indoor shooting scenarios.

Lightweight yet capable, Lark Max 2 allowed the team to focus less on managing equipment and more on directing, storytelling, and delivering polished content for their clients.

Born in Film, Live on Campus: How VenusLiv Air Powered a Full-Day University Podcast Production

Hollyland’s VenusLiv Air live streaming camera delivered reliable, broadcast-ready visuals for a full-day university podcast production, helping a small creative team run a three-camera livestream with confidence, clarity, and control.

“VenusLiv Air is a really great product for podcasting. It’s not that expensive, but it delivers high-quality video.”

Meet Franz Lopez & Born in Film

Based in the Philippines, Franz Lopez is a visual storyteller with a background in journalism, documentary, and branded content, and one of the organizers behind Born in Film, a non-profit creative community dedicated to building a sustainable and diverse ecosystem for visual storytellers.

Born in Film operates as both a creative platform and an educational hub, showcasing photography across different genres and mediums while sharing insights on new technology, reviews, and production workflows. Through workshops, podcasts, and learning initiatives, the organization aims to expand access to creative tools and knowledge for storytellers at every level.

The Project: A Full-Day University Podcast Livestream

The project took place at Ugnayang La Salle, De La Salle University Dasmarinas in Cavite, Philippines, as part of the university’s PRISM podcast program. The production featured a host and a guest speaker in a three-camera setup, livestreamed continuously across a scheduled 10-hour full day.

Despite operating with a compact crew of six, the livestream reached a peak of 9,000 views, requiring consistent image quality, reliable switching, and stable operation over extended hours.

Crew Structure

  • 1 Host
  • 1 Live Director
  • 1 Live Switcher
  • 2 Camera Operators / Technical Support
  • 1 Social Media Manager

The podcast was held inside an indoor gymnasium. To ensure stable connectivity in a challenging indoor environment, the team relied on Starlink for internet access, allowing the livestream to remain uninterrupted throughout the day.

Why VenusLiv Air

Before VenusLiv Air, the team had relied on action cameras, which presented limitations for podcast production. Ultra-wide fields of view and weak low-light performance made it difficult to achieve a natural, professional look suitable for long-form conversation.

VenusLiv Air was introduced as the main camera to address these challenges. Designed specifically for live production, it delivered tighter framing, improved low-light performance, and a more refined image suitable for a university podcast setting.

“One of the top-of-the-line cameras that can compete with mirrorless cameras and DSLRs.”

Solving Real Production Challenges

During the 10-hour livestream, VenusLiv Air demonstrated the consistency and reliability needed for extended productions. As the primary camera, it provided clean, stable visuals that held up throughout long sessions without requiring constant adjustment.

In the indoor gymnasium environment, lighting conditions were far from ideal. VenusLiv Air’s image quality allowed the team to maintain clear, balanced visuals, avoiding the flat or distorted look often associated with action cameras in low-light spaces.

With the live director and switcher coordinating camera feeds in real time, VenusLiv Air integrated smoothly into the multi-camera workflow, supporting confident live switching and on-the-fly direction.

Takeaways

For Franz Lopez and the Born in Film team, VenusLiv Air proved to be a practical and capable solution for long-form podcast livestreaming. By delivering high-quality video output in challenging indoor conditions, it helped elevate the visual standard of a university production without the complexity or cost of traditional mirrorless or DSLR setups.

Reliable over long hours and adaptable to a small crew workflow, VenusLiv Air enabled the team to focus on storytelling, conversation, and audience engagement rather than technical limitations.

What is Image Overlay? The Filmmaker’s Guide to Visual Effects & Monitoring

If you are wondering what image overlay is and how you can use it in filmmaking, think of it as a simple creative technique. You place one visual, like a video clip, graphic, or texture, on top of another to add style or meaning. By stacking these layers and adjusting their opacity or blend modes, filmmakers can create a composite image that combines the visual information of both sources into a single frame.

While most beginners associate overlays strictly with editing skills, professional filmmakers use them in two distinct environments:

  1. Artistic (Post-Production): In post-production, editors use image overlays for creative touches. They place things like film grain, light leaks, or logos on a video layer above your main footage. This helps them add texture, depth, or even branding without much effort.
  2. Technical (On-Set Monitoring): As a director or cinematographer, you can use external monitors with overlay features to make your job easier. You ghost a previous take over the live shot to keep things consistent. Or, use frame guides to line up your composition more accurately.

Type 1: Using Image Overlays in Post-Production (Editing)

When you’re editing, image overlays are mostly used to add style and polish. Most modern digital cameras produce images that are clinically sharp and clean, and even technically flawless footage can sometimes feel a bit lifeless.

Filmmakers use overlays to reintroduce organic imperfections, add stylistic lighting, or insert essential branding elements. Below are the three most common categories of post-production overlays and how to apply them.

Texture Overlays: Creating the “Film Look”

Texture overlays are used to intentionally degrade the footage to mimic the aesthetic of analog film.

  • Film Grain: Digital sensors do not have grain; they have noise. Overlaying scanned 16mm or 35mm film grain adds a pleasing, organic texture that “glues” your edit together.
  • Dust and Scratches: These mimic the physical wear and tear of a film reel. They are excellent for flashback sequences, horror films, or vintage-style music videos.

Light Effects: Leaks and Flares

Light overlays add depth and dynamism to a flat shot.

  • Light Leaks: These occur when light accidentally hits film stock. In digital editing, they are often used as transitions. A bright flash of orange or red light can easily cover up a cut between two very different clips.
  • Lens Flares & Bokeh: If you are shooting against a flat background, overlaying out-of-focus light elements (bokeh) in the foreground creates an illusion of depth and complexity.

Graphics: Logos and Lower Thirds

Unlike textures, which usually cover the whole screen, graphic overlays provide specific information.

Alpha Channels: Most graphic overlays (like a YouTuber’s “Subscribe” button or a documentary Lower Third) are exported as PNGs or ProRes 4444 files. These formats contain an Alpha Channel, meaning the background is already transparent. You simply place them on the top video track, and no blending effects are required.

Mastering Blend Modes

If your overlay isn’t a transparent PNG, like a film grain clip that looks like grey noise or a light leak on black, you’ll need to use Blend Modes to make it work.

Blend modes tell your editing software how to mix the pixels of the top layer (the overlay) with the bottom layer (your footage). Choosing the wrong mode will result in a muddy image or a black screen.

Here is a quick reference guide to the essential blend modes for filmmakers:

Blend Mode | How It Works | Best Used For
Screen | Removes Black. It keeps only the bright pixels of the overlay and makes the dark pixels transparent. | Light Leaks, Fire, Sparks, Lens Flares. (Any effect filmed against a black background.)
Multiply | Removes White. It keeps the dark pixels and makes the bright pixels transparent. | Paper Textures, Ink Spills, Grunge. (Great for mixed-media collages.)
Overlay / Soft Light | Adds Contrast. It mixes the layers to brighten the highlights and darken the shadows while applying the texture. | Film Grain, Noise, Subtle Textures. (Soft Light is a gentler version of Overlay.)
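
To make those modes concrete, here is a minimal Python/NumPy sketch of how an editor might compute Screen, Multiply, and Overlay on normalized (0–1) pixel values. The exact math inside Premiere Pro, Resolve, or Final Cut may differ slightly; these follow the common textbook definitions.

    import numpy as np

    # base = your footage, overlay = the texture; both floats in 0-1, shape (H, W, 3).
    def screen(base, overlay):
        # Screen removes black: a pure-black overlay pixel leaves the base untouched.
        return 1.0 - (1.0 - base) * (1.0 - overlay)

    def multiply(base, overlay):
        # Multiply removes white: a pure-white overlay pixel leaves the base untouched.
        return base * overlay

    def overlay_mode(base, overlay):
        # Overlay multiplies in the shadows and screens in the highlights,
        # which adds contrast while keeping mid-tones roughly in place.
        return np.where(base < 0.5,
                        2.0 * base * overlay,
                        1.0 - 2.0 * (1.0 - base) * (1.0 - overlay))

    rng = np.random.default_rng(0)
    footage = rng.random((4, 4, 3))   # stand-in for the bottom video layer
    grain = rng.random((4, 4, 3))     # stand-in for a grain/texture clip
    blended = overlay_mode(footage, grain)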

Type 2: Using Image Overlays On-Set (Monitoring & Continuity)

While post-production overlays focus on aesthetics, professional filmmakers rely on technical overlays during the actual shoot. When you’re on set, overlays aren’t just for looking cool. They help you make sure everything is accurate, flows smoothly, and looks balanced.

Using overlays on your field monitor allows the Director of Photography (DP) and Director to visualize the final output before the camera even rolls, saving hours of frustration in the editing room.

Ensuring Continuity with “Ghosting” (Onion Skinning)

One of the most powerful uses of image overlay on-set is Ghosting (also known as Onion Skinning). This feature lets you grab a still frame from an earlier shot and overlay it on your live camera feed at about half opacity (around 50%). 

It’s useful for helping you with:

  • Stop Motion Animation: Ensuring the subject moves the exact distance required between frames.
  • Match Cuts: Aligning an actor’s eyes or a prop in the exact same screen position as the previous scene to create a seamless transition.
  • Reshoots: If you are returning to a location days later, a ghost overlay ensures you match the exact camera angle and focal length of the original footage.
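
As a rough illustration of what the monitor does for ghosting (not any specific monitor’s implementation), the sketch below mixes a saved reference frame over the live frame at 50% opacity; the frame data here is only a placeholder.

    import numpy as np

    def ghost_overlay(live_frame, reference_still, opacity=0.5):
        # Blend a stored reference frame over the live feed (onion skinning).
        # Both inputs are uint8 arrays of identical shape (H, W, 3);
        # opacity=0.5 matches the "about half opacity" used for ghosting.
        live = live_frame.astype(np.float32)
        ref = reference_still.astype(np.float32)
        mixed = (1.0 - opacity) * live + opacity * ref
        return np.clip(mixed, 0, 255).astype(np.uint8)

    live = np.zeros((1080, 1920, 3), dtype=np.uint8)            # placeholder live feed
    saved_take = np.full((1080, 1920, 3), 128, dtype=np.uint8)  # placeholder previous take
    ghosted = ghost_overlay(live, saved_take)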

Framing for Multiple Platforms (Aspect Ratio Overlays)

In modern production, one video often has to fit YouTube (16:9), Instagram Reels (9:16), and a website header (21:9). Because each aspect ratio is different, shooting without visual guides makes it hard to frame a composition that works for all of them.

Aspect Ratio Overlays place semi-transparent black bars or guide lines over your monitor’s image. This helps the camera operator frame the subject perfectly for the main format. It also ensures nothing gets cut off when the video is cropped for vertical social media posts.
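
One way to reason about where those guides sit: given the full frame, compute the pixel region that survives a centered crop to the target aspect ratio. The centered-crop assumption in this sketch is mine; real deliverables are sometimes reframed off-center.

    def safe_region(frame_w, frame_h, ratio_w, ratio_h):
        # Return (x, y, w, h) of a centered crop with the target aspect ratio.
        target = ratio_w / ratio_h
        if target < frame_w / frame_h:
            # Target is narrower than the frame: crop the sides (e.g. 9:16 vertical).
            w, h = round(frame_h * target), frame_h
        else:
            # Target is wider than the frame: crop top and bottom (e.g. 21:9 header).
            w, h = frame_w, round(frame_w / target)
        return (frame_w - w) // 2, (frame_h - h) // 2, w, h

    # The 9:16 safe zone inside a 4K UHD 16:9 frame.
    print(safe_region(3840, 2160, 9, 16))   # -> (1312, 0, 1215, 2160)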

Wireless Monitoring and Overlay Integration

To effectively use these tools, the crew needs to see them in real-time without being tethered to the camera. This is where a robust wireless video transmission workflow becomes the backbone of the production.

So, if you’re looking for smooth on-set monitoring and communication, the Hollyland Pyro 5 is a proven choice for professional crews, and for some pretty solid reasons:

  • It’s a 3-in-1 device that works as a monitor, transmitter, and receiver (no swapping of gear), giving directors and focus pullers direct access to live camera feeds.
  • Using this external wireless transceiver on a sunny day is no issue, thanks to its 1500 nits of brightness, which keeps the screen clear and easy to view.
  • It functions effortlessly with the entire Pyro ecosystem, including Pyro S, Pyro H, and Pyro 7.

More importantly, this device supports 3D LUT overlays and professional monitoring tools. You can apply a specific color look (LUT) to the monitor feed to preview the final color grade, or toggle frame guides on and off wirelessly. This helps everyone on set, from the lighting crew to the script supervisor, see the image that matches your creative vision before you start filming. That way, everyone knows the look and feel you’re going for and can make it happen.

Step-by-Step: How to Add an Image Overlay in Your Editor

Compositing works the same way whether you’re using Adobe Premiere Pro, DaVinci Resolve, or Final Cut Pro: you place layers on top of each other, then tell the software how they should blend together.

So let’s check the universal method to apply textures, light leaks, or logos to your footage.

  1. Import Your Assets
    Bring both your primary video footage (the background) and your overlay asset (the texture, grain, or graphic) into your project bin.
  2. Stack the Layers on the Timeline


Drag your main video clip onto Video Track 1 (V1). Next, drag your overlay asset onto Video Track 2 (V2), directly above the clip you want to affect. Think of this like a physical stack of paper; the clip on top covers the clip on the bottom.

  3. Access the Compositing Controls


Select the overlay clip on V2. Open your editing software’s effects panel:

  • Premiere Pro: Go to the Effect Controls panel.
  • DaVinci Resolve: Open the Inspector tab.
  • Final Cut Pro: Check the Video Inspector tab.
  4. Change the Blend Mode


Locate the “Blend Mode” or “Composite Mode” dropdown menu. By default, this is set to “Normal.” Change this based on your asset type (refer to the table in the section above):

  • Select Screen for black backgrounds (light leaks).
  • Select Multiply for white backgrounds (paper textures).
  • Select Overlay for mid-tone textures (grain).
  5. Adjust Opacity to Taste


Rarely should an effect be left at 100% intensity. Lower the Opacity slider to blend the effect naturally. For film grain, 20–30% opacity is often the “sweet spot” where the texture is felt but not distracting.
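
If it helps to see steps 4 and 5 as arithmetic, the sketch below applies a blend mode first and then “dials in” opacity by mixing the blended result back toward the untouched footage; the 25% value is just one example from the 20–30% grain range mentioned above.

    import numpy as np

    def screen(base, overlay):
        # Screen blend, as used for light leaks and other black-background assets.
        return 1.0 - (1.0 - base) * (1.0 - overlay)

    def apply_overlay(base, overlay, blend_fn, opacity):
        # Step 4: blend the V2 overlay onto the V1 footage.
        blended = blend_fn(base, overlay)
        # Step 5: opacity fades the blended result back toward the original footage.
        return (1.0 - opacity) * base + opacity * blended

    rng = np.random.default_rng(1)
    footage = rng.random((8, 8, 3))   # stand-in for the V1 clip (0-1 floats)
    leak = rng.random((8, 8, 3))      # stand-in for the V2 overlay asset
    result = apply_overlay(footage, leak, screen, opacity=0.25)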

Where to Find High-Quality Image Overlays (Free & Paid)

A high-quality overlay blends seamlessly into your footage, while a low-quality one looks like a cheap sticker. Depending on your budget and needs, here are three main sources for sourcing professional overlays.

1. Dedicated VFX Libraries

For the most realistic results, look for libraries specifically designed for filmmakers. Unlike generic stock sites, these platforms often provide ProRes or RAW files with high dynamic range.

  • CinePacks & Lens Distortions: Known for authentic 4K light leaks, smoke, and atmospheric fog captured in-camera.
  • Tropic Colour: Excellent for film grain scans and vintage textures.
  • ActionVFX: The industry standard for explosions, fire, and debris elements (often used in Hollywood productions).

2. Stock Footage Platforms

If you need lots of different effects, like “old tape” glitches or floating dust particles, using a subscription stock site usually gives you the most for your money. You get access to many assets all at once, which makes it easier to find what you need.

  • Envato Elements & Motion Array: Great for “all-in-one” subscriptions that include motion graphics templates and overlay packs.
  • Storyblocks: Offers a massive library of generic overlays like rain, snow, and bokeh.
  • Pexels & Pixabay: Good place to start if you want free stock video overlays. However, not all the videos you find there will be of high quality and resolution. So you will need to spend more time finding the right footage that fulfills your requirements.

3. Create Your Own (DIY)

Sometimes the specific texture you need doesn’t exist yet. You can create custom overlays using tools you likely already own.

  • Photoshop/After Effects: Perfect for creating static overlays like binoculars, viewfinders, or grunge borders. Export these as PNGs with a transparent background or black-and-white JPEGs to use with blend modes.
  • In-Camera Capture: You can film your own light leaks by “lens whacking” (detaching the lens slightly) or shooting light sources against a dark background. This guarantees your overlay matches the exact camera sensor and noise profile of your main footage.

Pro Tip: Watch Your Resolution

Always ensure your overlay resolution matches your timeline. If you are editing a 4K project, avoid using 1080p overlays. If you try to stretch a 1080p grain or dust clip to fill a 4K frame, it will end up looking blurry and pixelated. That messes with the whole high-quality look you’re going for.

Common Mistakes to Avoid When Using Overlays

Overlays can make your video look great, but using them the wrong way can make it look cheap. If the effects take attention away from the story, people will stop watching.

So let’s avoid the following mistakes:

1. Overusing Effects (“The Muddy Look”)

Less is almost always more. A common beginner mistake is stacking multiple film grains, light leaks, and grunge textures on a single clip. This reduces the contrast of your original footage and creates a “muddy,” washed-out image.

  • The Fix: Stick to one or two subtle overlays. If you can’t see the subject clearly because of a massive lens flare, reduce the Opacity or change the Blend Mode.

2. Mismatched Frame Rates

When the technical details don’t line up, the scene can feel fake and break your viewers’ attention. If you are editing a cinematic timeline at 24fps but apply a stock footage overlay (like rain or fire) that was rendered at 30fps or 60fps, you will likely introduce “judder” or stuttering.

  • The Fix: Always check the frame rate of your assets before importing. If they don’t match, interpret the footage in your NLE to match your timeline, or find assets that are natively compatible.

3. Ignoring Color Space and Workflow Order

If you apply a high-contrast Rec.709 overlay, like a pre-rendered HUD graphic, above flat LOG footage before color grading, it will clash with the rest of your image. The overlay will look crushed and saturated, while the footage looks gray and flat.

  • The Fix: Apply your overlays after your color grade or color space transform (CST). This ensures the overlay interacts with the final, corrected image rather than the raw camera data.

Subtlety is Key

Great filmmaking is all about the little details. Just as bad audio can undermine great visuals, having the right tools, like the Hollyland LARK MAX 2, is essential for capturing clear sound.

  • Monitor audio wirelessly with the new OWS earphones
  • 32-bit Float recording for no distortion, even in loud scenes
  • Studio-quality 48kHz/24-bit audio for crystal-clear sound
  • Professional ENC noise cancellation to keep the audio clean in any environment

Frequently Asked Questions (FAQ)

What is the best blend mode for removing black backgrounds?

The Screen blend mode is the industry standard for removing black backgrounds. This mode tells your editing software to display only the pixels that are brighter than the layer below it, effectively turning pure black (RGB 0,0,0) transparent. This is essential for using “organic” overlays like light leaks, fire, smoke, or dust.

Can I use image overlays on mobile editing apps?

Yes, most modern mobile video editors support advanced overlay features. Apps like CapCut, VN Video Editor, and LumaFusion allow you to stack multiple video or image tracks.

  • In CapCut: Look for the “Overlay” button to add a second layer, then use the “Splice” menu to access blend modes like Filter, Lighten, or Screen.
  • In VN: Use the “PIP” (Picture-in-Picture) track to import your overlay assets.

What is the difference between an overlay in photography vs. video?

They are basically the same idea. You stack layers to combine visuals, but the process can be more or less complicated. Photography involves blending static images, while Video requires you to manage duration, frame rates, and motion. For example, if your film grain overlay is only 10 seconds long but your scene is 30 seconds, you must loop the overlay on the timeline to cover the full duration.
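
For the looping case above, the bookkeeping is just modulo arithmetic: each scene frame picks an overlay frame that wraps back to the start when the shorter clip runs out. The frame rate and durations below are placeholder values.

    FPS = 24
    overlay_frames = 10 * FPS   # a 10-second grain clip
    scene_frames = 30 * FPS     # a 30-second scene

    # For each scene frame, choose which overlay frame to composite on top.
    # The modulo wraps to the start of the overlay every 10 seconds,
    # which is exactly what looping the clip on the timeline does.
    loop_map = [i % overlay_frames for i in range(scene_frames)]
    print(loop_map[239], loop_map[240])   # last frame of the first pass, then back to 0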

Conclusion

Image overlays aren’t just a trend. They add cool effects like film grain or light leaks, and help keep things smooth during filming. Knowing how to use layers, whether in DaVinci Resolve or on location, can make your videos look more professional. For this, you need reliable gear like wireless monitors and good microphones. You can check out Hollyland’s ecosystem of filmmaking tools to enhance the quality of video projects.

What is Focus Peaking? The Ultimate Guide to Sharper Images & Video

Autofocus is powerful, but it isn’t perfect, especially for macro work, low light, or cinematic video. If you are wondering what focus peaking is and how it helps, the short answer is that it shows you exactly what is sharp while you shoot.

Focus peaking is the industry-standard tool for mastering manual focus.

So here, you will learn how focus peaking works. We’ve also shared some thoughtful reasons why you should use it over autofocus.

What is Focus Peaking and How Does It Work?

Focus Peaking works by digitally analyzing the image for areas of high contrast. When your image is in focus, the edges are sharp (high contrast). If it’s blurry, the edges look soft (low contrast). The camera recognizes the sharp edges and puts a colored highlight on them, usually red, yellow, or white.

The Technology Behind the “Shimmer”

Focus peaking relies on an edge-detect filter similar to those used in image editing software. The camera’s processor scans the live video feed millions of times per second to find distinct transitions between light and dark.

  • Blurry Image: Edges are soft and gradual. The processor detects low contrast and displays no peaking highlights.
  • Sharp Image: Edges are crisp and abrupt. The processor detects high contrast and immediately “paints” that specific area with your chosen peaking color.
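
As a rough illustration of that edge-detect-and-paint pass (cameras run tuned filters in dedicated hardware, so this is only a sketch), the Python/NumPy snippet below computes a luminance gradient, thresholds it, and tints the strong edges red; the threshold value stands in for the peaking sensitivity setting discussed later.

    import numpy as np

    def focus_peaking(frame_rgb, threshold=0.12, color=(255, 0, 0)):
        # Overlay a peaking color on the high-contrast edges of an RGB uint8 frame.
        # A lower threshold behaves like a "high sensitivity" setting (more shimmer);
        # a higher threshold behaves like "low sensitivity" (only critically sharp edges).
        img = frame_rgb.astype(np.float32) / 255.0
        luma = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]

        # Gradient magnitude approximates an edge-detect filter:
        # in-focus areas produce abrupt luminance transitions, blurry areas do not.
        gy, gx = np.gradient(luma)
        edges = np.sqrt(gx ** 2 + gy ** 2) > threshold

        out = frame_rgb.copy()
        out[edges] = color   # paint the shimmer only where edges are sharp
        return out

    frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # placeholder live-view frame
    frame[:, 640:] = 255                               # one hard edge that should light up
    peaked = focus_peaking(frame)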

What You See in the Viewfinder

When you enable focus peaking and begin rotating your lens’s focus ring, you will see a colored “shimmer” or “static” appear on objects as they enter the depth of field.

For example, if you are filming a portrait:

  1. As you turn the focus ring, the background might shimmer red (indicating the background is sharp).
  2. As you continue turning, the red highlight moves forward to the subject’s ears.
  3. Finally, the red highlight settles on the subject’s eyes and eyelashes.

Once the highlights show up right where you want, you can tell your shot is sharp without staring at the small screen. Yes! No more squinting at your camera’s LCD.

Why You Should Use Focus Peaking (vs. Autofocus)

Modern Autofocus (AF) systems work for general snapshots, but for creative work, trusting the camera to choose your focal point often leads to ruined footage.

Focus peaking makes manual focusing way easier by digitally highlighting sharp areas using colors. It’s faster than just relying on your eyes and helps you get accurate shots even in challenging areas like low light or with a shallow depth of field.

1. Macro Photography

When you’re shooting macro, the area that’s in focus (depth of field) is usually super tiny (sometimes less than a millimeter).

  • The AF Problem: Autofocus often struggles to lock onto the specific texture you want (like the eye of an insect) and instead jumps to a leg or a leaf in the background.
  • The Peaking Solution: Focus peaking shows you exactly which part of your subject is sharp. You can actually see the highlight shimmer across it, so you know the moment the important detail is in focus.

2. Video Production

When shooting a video, if a lens is constantly hunting for focus, it can totally ruin a shot.

  • The AF Problem: If a subject moves slightly or an object crosses the frame, AF systems often “pulse” or hunt for focus. This creates a distracting “breathing” effect that makes footage look amateur.
  • The Peaking Solution: Professional videographers rely on manual focus to ensure consistency. Peaking allows you to execute a smooth rack focus (shifting focus from a foreground object to a background object) with confidence, knowing exactly when your target is sharp without the camera guessing for you.

3. Low Light Environments

Autofocus sensors need enough light and contrast to do their job.

  • The AF Problem: In low light, cameras have trouble finding edges, so the lens keeps hunting back and forth (racking focus in and out). It happens because the camera cannot find a focal point and therefore fails to lock on.
  • The Peaking Solution: Even in grainy, low-light conditions, focus peaking can detect edge contrast that the AF system misses. By switching to manual and watching for the highlight overlay, you can lock focus in the dark much faster than an automated motor can.

4. Using Vintage Lenses

One of the biggest advantages of mirrorless cameras is the ability to adapt vintage “legacy glass” (like old Canon FD, Nikon AI, or Minolta lenses).

  • The AF Problem: These lenses have no electronic contacts, meaning autofocus is impossible.

  • The Peaking Solution: Focus peaking makes manual focusing on vintage lenses faster and more accurate than it ever was on the original film cameras. You can get that classic vintage look while keeping everything sharp with modern clarity.

How to Set Up and Use Focus Peaking (Step-by-Step)

While every camera manufacturer (Sony, Canon, Nikon, Fujifilm) buries the setting in a slightly different menu, the workflow for enabling and using focus peaking is universal. Follow these five steps to take full control of your manual focus.

  1. Switch to Manual Focus (MF)
    Focus peaking generally will not activate while you are in Autofocus mode. First, switch your lens or camera body to MF.
  • On the Lens: Look for a physical switch labeled AF/MF and slide it to MF.
  • On the Body: Some cameras (like Fujifilm or older Nikon DSLRs) have a physical dial on the front or top of the body.
  • In the Menu: If your lens lacks a switch (common with Sony E-mount or Canon RF kit lenses), go into your camera’s “Focus Mode” settings and select Manual Focus.
  2. Enable Peaking in the Menu
    Navigate to your camera’s settings. This is typically found under “Focus Assist,” “Display Settings,” or the “Shooting Assist” tab. Look for an option labeled Peaking Setting or Peaking Display and turn it ON.
  3. Choose Your Peaking Color

Don’t just stick with the default color. You need a color that contrasts heavily against your specific subject and background so the highlights are clearly visible.

  • Red: The industry standard. Great for landscapes and architecture. Avoid using red when shooting portraits, as it can blend in with skin tones.
  • Yellow/White: Excellent for low-light situations where red might get lost in the shadows.
  • Blue: Ideal for shooting warm subjects, sunsets, or anything with red/orange hues.
  4. Set the Sensitivity Level (Peaking Level)

This controls how “strict” the camera is about what it considers to be in focus.

  • Low Sensitivity (Recommended): The camera is picky and only highlights edges that are critically sharp. The shimmer is harder to see, but it offers the highest precision for macro or 4K video.
  • High Sensitivity: The camera is generous and highlights anything that is roughly in focus, so the shimmer is easy to see. The trade-off is that it can make a slightly soft shot look sharp, so save this setting for situations where you need to shoot quickly.
  5. Rotate the Focus Ring

Look at your LCD screen or Electronic Viewfinder (EVF). As you rotate the focus ring on your lens, you will see a colored “shimmer” or static-like noise appear on the edges of objects.

  • The Goal: Rotate the focus ring while watching your screen. Keep going until the colored highlights are brightest on your subject’s most important part, usually the eyes.
  • The Result: When the highlights fully cover the edges, that area is perfectly sharp, and you’re ready to take the shot.

Advanced Technique: Using Focus Peaking with External Monitors

Relying solely on a 3-inch LCD screen has its limits. When you are shooting in 4K or 8K resolution, the details are so fine that a small camera screen simply cannot display enough resolution for you to judge critical sharpness accurately. What looks sharp on a 3-inch screen often looks soft when blown up on a desktop monitor.

That’s why professionals almost always use external monitors, like ones from Hollyland, Atomos, or SmallHD. These bigger screens, usually 5 to 7 inches or more, make it much easier to see focus peaking highlights. Plus, external monitors have better peaking controls. Unlike your camera’s built-in options, they let you adjust the intensity, thickness, and color of the overlay so it never gets in the way of your shot.

The “Focus Puller” Workflow & Wireless Transmission

In high-end productions, the person moving the camera (the operator) is rarely the person controlling the focus. That job belongs to the 1st AC (Assistant Camera), also known as the focus puller.

To pull focus accurately without physically touching the camera—which could shake the rig or crowd the operator—crews use a wireless workflow. This is where Wireless Video Transmission Systems become the industry standard.

With a system like the Hollyland Pyro 5, your camera sends a clean, high-quality video feed straight to the focus puller’s monitor. But that’s just the beginning! Focus peaking becomes much easier with this tool because it offers:

  • 3-in-One Device: The monitor, transmitter, and receiver are all in one, so you don’t need extra gear, and the team can work smoothly.
  • 1500 Nits Brightness: Peaking highlights stay visible even in bright sunlight, making outdoor focusing simple.
  • Strong Pyro Ecosystem: It connects effortlessly with other Pyro series devices, including Pyro S, Pyro H, and Pyro 7, keeping the system reliable and well-synced.

By keeping framing and focus separate, your visuals stay sharp and well composed. Whether you’re shooting alone or with a crew using wireless feeds, focus peaking is the fastest way to nail sharp shots.

Common Focus Peaking Mistakes to Avoid

These are the most common mistakes to avoid when using manual focus.

1. The “High ISO” Trap

Unfortunately, digital noise (grain) caused by high ISO settings also looks like high contrast to your camera’s processor.

When shooting in low light with a high ISO (e.g., ISO 3200 or above), the entire screen may look like it is covered in “static” peaking highlights. This makes it impossible to distinguish the actual subject from the background noise.

  • The Fix: In low-light scenarios, reduce your peaking sensitivity to Low or turn peaking off entirely and rely on Focus Magnification instead.

2. Setting Peaking Level Too High

Most cameras offer sensitivity settings: High, Mid, and Low.

  • High Sensitivity: The camera is quite generous about what it counts as “in focus.” It will highlight areas that are acceptable but not critically sharp.
  • Low Sensitivity: When you need true accuracy, it is wise to set the sensitivity to Low or Mid.

Beginners often set sensitivity to High because it makes the highlights easier to see. However, this often results in images that look sharp on the small LCD screen but appear soft when viewed on a 4K monitor. For critical work, always use Low or Mid sensitivity to ensure true precision.

3. Ignoring “Focus Magnification” (Punch-In)

Focus peaking gives you a wide view, not a close-up. It’s great for quickly identifying the focal plane, but it usually isn’t precise enough for tiny details, like an actor’s eyelashes or the texture of a product.

Never rely only on peaking for static shots. Use peaking to get the focus 90% of the way there, then activate your camera’s Focus Magnification (Punch-In) feature to verify the final 10%. Peaking gets you in the neighborhood; magnification rings the doorbell.

4. Shooting Through Obstructions

Because peaking loves hard edges, it struggles when you are shooting through foreground elements like chain-link fences, dirty windows, or foliage. If you see the foreground glowing with your peaking color, do not assume your subject is also sharp. Always check that the shimmer has passed the obstruction and settled on what you want to focus on.

Frequently Asked Questions (FAQ)

Does focus peaking show up in the final photo or video?

No, absolutely not. Focus peaking is strictly a display overlay used for monitoring purposes, much like your battery life indicator, histogram, or zebra stripes. While you see the colored highlights on your LCD screen or electronic viewfinder (EVF) while shooting, they are not recorded onto the memory card. Your final image will be clean.

Which focus peaking color is best to use?

There is no single “best” color; the right choice depends entirely on your shooting environment. You need a color that creates the highest contrast against your subject so you can see it clearly.

  • Red: The most common standard, but avoid it if you are filming red flowers or warm skin tones.
  • Yellow/White: Excellent for low-light situations or darker subjects.
  • Blue: Ideal for warm scenes (sunsets, deserts) where red or yellow might blend into the background.

Is focus peaking available on DSLR cameras?

Yes, but with limitations. Because DSLRs use an optical viewfinder (mirrors), they cannot display digital overlays like peaking through the eyepiece. However, most modern DSLRs (like the Canon 5D Mark IV or Nikon D850) offer focus peaking when you switch to Live View mode using the rear LCD screen. Mirrorless cameras generally offer a better experience because they can display peaking in both the LCD and the electronic viewfinder.

Can I use focus peaking with Autofocus?

In most cases, yes. Many camera manufacturers (including Sony and Fujifilm) offer an “AF+MF” mode. This allows the autofocus to get you close to the target, while focus peaking remains active to show you exactly what is sharp. This is incredibly useful for verifying that the AF system hasn’t accidentally locked onto the background or an eyelash instead of the pupil.

Conclusion

Focus peaking isn’t just a colorful overlay. It makes manual focusing easier while letting you see exactly what’s in focus. By showing contrast in real time, it helps you trust your eyes, even when filming wide open or in 4K. Whether you’re looking at your camera’s LCD or sending a high-resolution feed to a Hollyland wireless monitor, the correct use of focus peaking is the best way to get cinematic sharpness.

Your Next Step:

Don’t just read about it—build the muscle memory.

  1. Grab your camera and switch the lens to Manual Focus (MF).
  2. Enable Focus Peaking in your menu (try Red, High Level).
  3. Aim at a stationary object like a coffee mug or a plant.
  4. Rotate the focus ring back and forth until the “shimmer” lands perfectly on the texture of the object.

Once you master this on a still subject, you will be ready to tackle moving subjects and professional video work with confidence.

What is a False Color Chart? The Cinematographer’s Guide to Perfect Exposure

Trusting your camera’s LCD screen is one of the most common mistakes in filmmaking. You think the image is bright enough, but it’s not! That’s how regular monitors trick the eyes.

To get consistent, professional results, you need a tool like False Color.

Think of False Color as a “heat map” for your exposure. Instead of showing you the image as the human eye sees it, this mode replaces the image with a specific color palette based on luminance (brightness) values. It strips away the texture and detail, leaving you with a clear, color-coded map that tells you exactly where your light levels sit on the IRE scale.

What is a False Color Chart?

False Color is an exposure monitoring tool that assigns specific colors to luminance values based on the IRE scale. It allows filmmakers to instantly identify overexposed highlights (often Red), underexposed shadows (Purple/Blue), and properly exposed skin tones (Pink/Green) without relying on the brightness of their monitor.

If you learn this chart, you can set exposure without guessing. Whether you are shooting in a dark room or under bright sunlight, the False Color values remain absolute.

Understanding the IRE Scale: The Backbone of False Color

False Color is built on the IRE scale (Institute of Radio Engineers).

Think of IRE as a percentage ruler for light. It measures exactly how bright or dark every pixel in your image is. The scale typically runs from 0 to 100, representing the dynamic range of a standard Rec.709 video signal.

Here is the breakdown of the two critical extremes:

  • 0 IRE (Absolute Black): This is the floor. At 0 IRE, the sensor is recording no light information. If your shadows hit 0, they are “crushed.” Even if you try to fix it in post-production, you will only get noise or black pixels, not details.
  • 100 IRE (Absolute White): This is the ceiling. At 100 IRE, the sensor is fully saturated. This is known as “clipping.” Any detail in this range (like texture in a cloud or a bright window) is lost forever and will appear as a flat, featureless white blob.

Why this matters:

False Color takes this numerical scale and paints it. Instead of guessing if a highlight is at 90 IRE (safe) or 100 IRE (clipped), the monitor assigns a specific color (usually red) to anything hitting 100 IRE. By understanding that the goal is usually to keep your image data strictly between 0 and 100, you can instantly spot where your exposure is failing.
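
As a simplified worked example of the scale itself, assuming full-range 8-bit video where code value 0 maps to 0 IRE and 255 to 100 IRE, the conversion is just a percentage (legal-range signals map 16–235 to 0–100 IRE instead):

    def ire_from_8bit(value, full_range=True):
        # Approximate IRE for an 8-bit code value. full_range=True assumes 0-255
        # spans 0-100 IRE; "legal" video range maps 16-235 to 0-100 IRE instead.
        if full_range:
            return 100.0 * value / 255.0
        return 100.0 * (value - 16) / (235 - 16)

    print(round(ire_from_8bit(128), 1))          # code value 128 sits around 50 IRE
    print(round(ire_from_8bit(235, False), 1))   # legal-range white sits at 100 IRE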

How to Read a False Color Chart (Color Key)

Different cameras like ARRI, RED, and Sony, or monitors like SmallHD and Hollyland, might use slightly different colors, but the danger zones and skin tones stay the same. Therefore, the goal is pretty simple: Keep your subject in the safe colors (Pink/Green/Grey) and keep essential details out of the danger zones (Red/Purple).

Standard False Color Reference Table

Use this table as a quick reference guide for most standard false color tools.

Color Indicator | IRE Range (Approx.) | Meaning | Action Required
Red | 95 – 100 IRE | Clipping (White) | Stop Down. Detail is lost; highlights are blown out.
Yellow | 80 – 95 IRE | Near Clipping | Caution. Safe for bright light sources (windows, lamps), but risky for faces.
Pink | 50 – 70 IRE | Lighter Skin Tones | Perfect. Typical target for Caucasian skin or highlighted areas of darker skin.
Green | 40 – 50 IRE | Darker Skin / 18% Grey | Perfect. Typical target for darker skin tones or the shadow side of a face.
Grey | 40 – 42 IRE | Middle Grey | Baseline. Use this to expose a grey card for neutral exposure.
Blue / Purple | 0 – 10 IRE | Crushed Blacks | Open Up. Detail is lost in the shadows; image will be noisy.
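
To show how a monitor might turn that table into an on-screen overlay, here is a minimal Python/NumPy sketch that classifies each pixel’s IRE value into the zones above and paints a matching color. The boundaries and palette below are approximations of the table (manufacturers vary, and the Green band is nudged to 42–50 IRE here so it does not overlap Grey); anything outside a listed band stays a neutral dark grey.

    import numpy as np

    # (low IRE, high IRE, display color), loosely following the reference table above.
    ZONES = [
        (95, 101, (255, 0, 0)),      # Red: clipping
        (80, 95, (255, 255, 0)),     # Yellow: near clipping
        (50, 70, (255, 105, 180)),   # Pink: lighter skin tones
        (42, 50, (0, 200, 0)),       # Green: darker skin / 18% grey
        (40, 42, (160, 160, 160)),   # Grey: middle grey
        (0, 10, (128, 0, 255)),      # Blue / Purple: crushed blacks
    ]

    def false_color(ire_map):
        # ire_map: float array of shape (H, W) holding IRE values 0-100.
        out = np.full(ire_map.shape + (3,), 60, dtype=np.uint8)   # neutral base
        for low, high, color in ZONES:
            out[(ire_map >= low) & (ire_map < high)] = color
        return out

    # A test ramp sweeping 0-100 IRE across the frame shows every zone in order.
    ramp = np.tile(np.linspace(0.0, 100.0, 512), (64, 1))
    heat_map = false_color(ramp)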

Decoding the Critical Zones

1. The Danger Zones (Red & Purple)

If you see Red on a forehead or a white t-shirt, you have lost that data forever. It will be pure white in post-production. Similarly, Purple indicates “crushed blacks,” meaning there is no texture in the shadows.

  • Pro Rule: A light bulb can be red, and a cave can be purple, but a person’s face should never be those colors.

2. The Skin Tone Zone (Pink & Green)

  • Pink: Often represents roughly one stop over 18% grey. For lighter skin tones, you generally want the illuminated side of the face to show Pink.
  • Green: Often represents 18% grey or slightly above. For darker complexions, or the shadow side of a lighter face, Green indicates a healthy exposure.

3. The Baseline (Middle Grey)

If you are using a standard 18% Grey Card to set your base exposure, it should appear Green or Grey on your monitor (depending on the specific brand’s mapping). Once your grey card “lights up” in the correct color, you know your base ISO and aperture are technically correct for the scene.

Note on Brand Variations: Always check your specific monitor’s manual. For example, ARRI False Color is the industry standard, but some external monitors use a “Spectrum” mode that introduces more colors (like Orange or Teal) for finer gradients. Stick to the ARRI standard whenever possible for consistency.

Step-by-Step: How to Use False Color for Perfect Skin Tones

Here’s a simple way to get a true skin exposure every time using a False Color chart.

1. Activate False Color

Turn on the False Color tool on your external monitor or camera. Your image will instantly turn into a psychedelic heat map. Do not panic; this is simply your luminance values being translated into color data.

2. Identify and Expose for Skin Tones

You need to adjust your lighting (intensity), aperture (f-stop), or ISO until your subject’s skin lands in the correct color zone.

While every False Color chart varies slightly by manufacturer, the general standards are:

  • For Lighter Skin Tones (Caucasian/Pale): Aim for Pink.
      • Pink usually represents roughly 70 IRE (highlights on the face).
      • If you see Yellow or Red on the skin, it is overexposed.
      • If you see Green or Grey, it is likely underexposed.
  • For Darker Skin Tones: Aim for Green or Dark Grey.
      • Darker complexions typically reflect less light, sitting between 40–55 IRE.
      • Aim to have the highlights of the cheekbones or forehead touching the upper edge of Green or transitioning into Pink, depending on the specific depth of the skin tone.

Action: Rotate your lens aperture ring or adjust your key light brightness until the “target color” covers the lit side of the subject’s face.
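
As a sketch of the check a DP is doing by eye at this step, the snippet below averages the IRE inside a hypothetical face region and names the zone it falls in; the face-box coordinates and zone boundaries are illustrative only, not any manufacturer’s standard.

    import numpy as np

    def skin_zone(ire_map, face_box):
        # Average the IRE inside face_box = (x, y, w, h) and name the exposure zone.
        x, y, w, h = face_box
        mean_ire = float(ire_map[y:y + h, x:x + w].mean())
        if mean_ire >= 95:
            zone = "Red (clipping) - stop down"
        elif mean_ire >= 80:
            zone = "Yellow (near clipping) - risky for faces"
        elif 50 <= mean_ire <= 70:
            zone = "Pink (lighter skin target)"
        elif 40 <= mean_ire < 50:
            zone = "Green (darker skin / 18% grey target)"
        elif mean_ire <= 10:
            zone = "Purple (crushed blacks) - open up"
        else:
            zone = "Between zones - nudge exposure toward your skin target"
        return mean_ire, zone

    ire_map = np.full((1080, 1920), 58.0)              # placeholder IRE map of the frame
    print(skin_zone(ire_map, (800, 300, 300, 400)))    # ~58 IRE lands in the Pink zone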

3. Check for Dangerous Highlights (Clipping)

Once the skin looks good, scan the rest of the frame. Look for bright Red patches.

  • Red indicates 95–100 IRE, meaning those areas are clipping and losing data.
  • If a practical light or the sun is red, that is usually acceptable.
  • Note: If you see Red on your subject’s forehead, nose, or cheekbones, you are “clipping the skin channel.” You must stop down or diffuse the light immediately, or that data will be unrecoverable in post-production.

4. Deactivate and Compose

False Color is an analytical tool, not a compositional one. It is distracting to frame a shot while looking at a predator-vision heat map.

  • Once exposure is locked, turn False Color OFF.
  • Return to your standard Rec.709 or Log view to check focus, framing, and color contrast.
  • Trust the reading you just took—even if the image looks slightly flat on the monitor, the data proves your exposure is mathematically correct.

False Color vs. Other Exposure Tools (Zebra, Waveform, Histogram)

Modern cinema cameras and monitors like the Hollyland Pyro 5 offer a suite of exposure tools. Each has its uses, but most force you to interpret an abstract graph. False Color stands out because it provides spatial context: it tells you exactly where your exposure values land on the image itself.

Here is how False Color compares to the standard alternatives.

The Histogram: Good for Data, Bad for Context

The histogram is the most common tool, found on everything from iPhones to cinema cameras. It shows a graph of the brightness distribution across the entire image.

  • The Problem: It is global, not spatial. A histogram can tell you that something in your shot is overexposed, but it won’t tell you if it’s the sun in the sky (acceptable) or your actor’s forehead (unacceptable).
  • The Verdict: Use it for a quick check of overall dynamic range, but never rely on it for lighting specific subjects.

Zebra Stripes: The “Danger” Warning

Zebras are a binary tool. You set a threshold (e.g., 100 IRE  or 70 IRE), and the camera overlays striped lines on any part of the image that exceeds that limit.

  • The Problem: It only shows one or two specific values. If you set Zebras to 100% to protect highlights, you have absolutely no data regarding your shadows or mid-tones. It shows you what’s risky, but it doesn’t help you control the light.
  • The Verdict: Works well for quick documentary filming to prevent clipping, but not for fine-tuning exposure.

Waveform Monitor: The Technical Standard

Professional DPs often rely on the Waveform because of its precision. Unlike the Histogram, the Waveform has horizontal spatial accuracy. If a bright object is on the left side of the frame, the spike on the graph appears on the left.

  • The Problem: It lacks vertical context. It overlays your image and sometimes gets in the way. You have to think about which waveform spike belongs to which object, which can be slow.
  • The Verdict: The gold standard for balancing lighting ratios in a controlled environment, but significantly slower to read than False Color.

Why False Color Wins

False Color is the only tool that creates a direct visual map. It paints the exposure data directly onto the object. You don’t have to look at a graph and wonder, “Is that spike the lamp or the window?” You simply look at the lamp; if it’s red, it’s clipping. If the face is pink, it’s exposed correctly.

Comparison at a Glance:

Feature | Histogram | Zebras | Waveform | False Color
Data Type | Global Graph | Threshold Warning | Horizontal Graph | Full Image Overlay
Spatial Context | None | Yes (Limited) | Horizontal Only | 100% Spatial
Best Used For | Quick dynamic range check | Preventing clipping (Safety) | Precise lighting ratios | Skin tones & scene balance
Readability | Low (Abstract) | High (Simple) | Medium (Technical) | High (Intuitive)

The Gear: How to Get False Color on Your Setup

If you are shooting with a dedicated cinema camera (like a RED Komodo, ARRI Alexa, or Blackmagic Pocket Cinema Camera), you likely have a False Color button mapped to a custom key. However, the vast majority of mirrorless hybrid cameras and DSLRs (such as the Sony Alpha series, Canon R series, or Panasonic Lumix) do not have internal False Color.

These cameras rely on simpler tools like Histograms or Zebras. These tools are useful, but they don’t show which part of the image hits 50 IRE. They just represent the overall light distribution. To fix this, you need an external monitor. A dedicated monitor takes the clean video feed from your HDMI or SDI port and processes the signal to generate a False Color overlay in real-time.

One-Stop-Solution: Hollyland Pyro 5

The Hollyland Pyro 5 packs a lot of control into a compact 5.5-inch display. This 3-in-1 transceiver keeps you in command wherever you shoot. You can master your exposure with features like:

  • 1500 Nits Brightness: Super clear visibility under bright sunlight
  • Pyro Ecosystem: Works effortlessly with Pyro S, Pyro H, and Pyro 7 
  • Advanced Image Analysis: False Color, Zebra Pattern, and loads of other tools for image accuracy
  • 3D LUT: Offers 5 built-in LUTs for consistent color-grading

Advanced Workflow: Monitoring Exposure Remotely

When it comes to serious filmmaking, you will rarely find the Director of Photography (DP) or Gaffer standing behind the camera. Instead, they watch the feed from a Video Village. This is a separate area with high-end, color-calibrated monitors so they can focus on exposure and lighting while the operator handles composition. To make it all work, the camera’s feed needs to reach a bigger screen where False Color can be used without getting in the operator’s way.

The Wireless Link: Why Latency Matters

When you are adjusting lights based on a monitor reading, speed is everything. If there is a lag between the camera sensor and your monitor, you might over-adjust a dimmer or miss a lighting cue.

This is where a robust wireless video transmission system becomes a critical exposure tool.

  • Real-Time Feedback: You need a system with ultra-low latency. If the camera pans into a bright window, you need to see the False Color shift to “Red” (clipping) instantly.
  • Signal Integrity: The wireless feed must be clean and stable. Artifacts or signal drops can be mistaken for image noise or focus issues.

Another Solution: Hollyland Pyro S

For crews moving beyond HDMI cables, systems like the Hollyland Pyro S are industry standards for establishing this link.

  • Hollyland Pyro S: Ideal for multi-person crews, this system utilizes Auto Dual-Band Hopping (ADH) technology that automatically switches between 2.4GHz and 5GHz. This ensures that even on sets crowded with other wireless signals, the video feed remains stable. Plus, this single wireless transmitter sends signals to 4 receivers simultaneously, supporting up to 4K/30fps image quality. It allows the DP to view the footage on a dedicated monitor while the director or client watches on a separate tablet or screen at the same time.

How to Set It Up

  1. Clean Feed: Set your camera to output a “Clean Feed” (no internal overlays) via HDMI/SDI to the Hollyland transmitter.
  2. Transmission: The Hollyland Pyro S sends the signal wirelessly to the receiver at Video Village.
  3. Analysis: Connect the receiver to a large production monitor.
  4. Exposure Check: Activate the False Color tool on the production monitor, not the camera.

This method allows the DP to toggle False Color on and off to check skin tones (Pink/Green) and highlights (Yellow/Red) independently, leaving the camera operator’s view uncluttered for framing and focus.

Pro Tips for Shooting Log vs. Rec.709 with False Color

Filmmakers often run into trouble here when switching from standard profiles to Log. If you don’t understand how your monitor handles False Color with LUTs, you might ruin your exposure. Keep in mind that False Color reads the brightness of the signal it is fed, and that signal looks very different for a flat Log image than for a Rec.709 image with a LUT applied.

The “Log Squeeze” Effect

Log profiles (like S-Log3, C-Log, or V-Log) are designed to maximize dynamic range by flattening the image. This compresses the luminance values on the IRE scale.

  • In Rec.709: Middle Grey is typically found around 40–50 IRE (often Green or Pink on charts).
  • In Log: Middle Grey shifts downwards significantly. For example, Sony’s S-Log3 places Middle Grey at 41%, while Canon C-Log3 places it around 35%.

If you try to expose a Log image using standard Rec.709 False Color targets (looking for Pink skin tones at 50-60 IRE), you will likely overexpose your footage by several stops.

The Golden Rule: Measure the Sensor, Not the LUT

Many external monitors let you add a 3D LUT to see the final footage. However, the important thing is to know whether False Color is reading the raw Log signal (input) or the LUT signal (output).

1. Monitoring Post-LUT (The Danger Zone)

If your False Color tool reads the signal after the LUT is applied, you are measuring the color grade, not the actual data the sensor is capturing.

  • Risk: A high-contrast LUT might crush shadow detail to 0 IRE or clip highlights to 100 IRE on the monitor, making you think you’ve lost data. In reality, the Log file still has plenty of information in those areas.
  • When to use it: Only use this to check lighting ratios (contrast) for the final look.

2. Monitoring Pre-LUT (The Safe Zone)

For technical accuracy, you should always check False Color on the flat Log image.

  • Benefit: This tells you exactly what the sensor is recording. If the Log signal hits Red (Clipping), that data is truly gone.
  • Workflow: Toggle your LUT OFF to check exposure with False Color, adjust your aperture/lighting, and then toggle the LUT back ON for composition.

Pro Gear Tip: High-end monitors like the Hollyland Pyro 5 often allow you to configure whether exposure tools analyze the input source or the LUT output. Always verify this setting in your menu before the shoot begins to ensure you aren’t making decisions based on false data.

Frequently Asked Questions (FAQ)

What is the correct IRE for skin tones?

There is no single “perfect” number, but generally, human skin tones fall between 40 IRE and 70 IRE.

  • Lighter Skin Tones: Typically sit between 60-70 IRE. On many false color charts, this often registers as Pink or Light Grey.
  • Darker Skin Tones: Typically sit between 40-55 IRE. This usually registers as Green or Dark Grey.
  • The Golden Rule: Consistency matters more than the specific number. Once you determine where your subject’s skin looks best on your specific sensor, aim to hit that same IRE value for every shot in the scene.

Should I expose for Pink or Green?

This depends entirely on the color map your monitor uses (e.g., ARRI, RED, or Hollyland standard) and the subject you are filming.

  • Pink (Usually ~70 IRE): Often indicates caucasian skin highlights or one stop over 18% grey. If you are filming a subject with pale skin, seeing Pink on the cheekbones is usually a sign of perfect exposure.
  • Green (Usually ~40 IRE): Often represents 18% Grey or darker skin tones. If you are filming a subject with deep skin tones, you generally want their key-lit side to read Green.

Important: Always check the color key (legend) on your specific monitor, such as the Mars M1 Enhanced, as manufacturers may use slightly different palettes.

Can I use False Color in post-production?

Yes. While False Color is primarily a production tool for capturing data, it is excellent for shot matching in editing software like DaVinci Resolve or Adobe Premiere Pro.

  • Auditing Footage: Turn on False Color in your grading suite to see if your highlights are actually clipped or just bright.
  • Consistency: If you are matching two cameras, use False Color to ensure the skin tones in Shot A generate the same color value as the skin tones in Shot B.

Why does my False Color look different on different monitors?

If your readings look wrong, it is usually due to one of two reasons:

  1. Manufacturer Standards: Different brands map IRE values to different colors. ARRI False Color is different from RED False Color. Stick to one standard to build muscle memory.
  2. Log vs. LUT: This is the most common error. If you apply False Color to a flat Log signal, the image will look muted (mostly grey/green) because the data is compressed. If you apply it after a Rec.709 LUT, the contrast stretches, and you will see more Red (clipping) and Purple (crushed blacks). Always know if your monitor is analyzing the Source (Log) or the Look (LUT).

Conclusion: Stop Guessing, Start Exposing

Using your camera screen or eyes to check exposure is not reliable. Ambient light, screen brightness, and eye strain can make you think a shot is fine when it’s under or overexposed. But guess what? False Color fixes this. It turns exposure into something you can see clearly. By showing brightness as colors, you know your skin looks good, shadows keep detail, and highlights are safe. Nevertheless, to use it properly, you need an external monitor that tells the truth.

]]>
Enhancing Online Meetings at QCAP Lab with Lyra https://www.hollyland.com/blog/application/enhancing-online-meetings-at-qcap-lab-with-lyra Fri, 16 Jan 2026 07:03:52 +0000 https://www.hollyland.com/?p=503010 ]]> Meet Ahmed Shaaban and His Team

Ahmed Shaaban is based in Egypt and works alongside his partners as part of the IT team at the QCAP Lab. The team is responsible for managing and supporting online meetings for the lab director, ensuring smooth communication and professional presentation during internal and external sessions.

Testing Background

At the QCAP Training Center, Ahmed and two colleagues conducted a hands-on test of Lyra during an internal Zoom meeting.

  • Room size: Approximately 100 square meters
  • Test duration: Around 15 minutes
  • Setup: Lyra software installed and fully configured

The test was carried out in a real meeting scenario to closely reflect their daily usage conditions.

Key Challenges Addressed

A More Professional and Engaging Meeting Experience

One of the standout features for the team was Auto Framing. It automatically adjusts the composition during the meeting, making the session look more dynamic and professional, rather than static and monotonous.

Ahmed noted that this was especially appreciated by their manager, who values looking polished on camera. The face-enhancing effect helped improve on-screen appearance in a natural way, increasing overall satisfaction — particularly for female participants.

Noticeably Improved Image Quality

The team had been using a Logitech webcam prior to testing Lyra. During the comparison, they immediately noticed a visible improvement in image quality.

Clear Voice Capture Without Constraints

Audio quality was another area where Lyra stood out.

  • Voices were clear and natural
  • No need to lean close to the camera to be heard
  • The microphone design allowed participants to speak freely and comfortably

This was particularly important in a larger meeting room environment.

Team Feedback

The test session received positive feedback from the entire team:

  • Ahmed: “The auto framing is very interesting.”
  • Hussam: “The quality deserves the price.”
  • Dongo: “Clear image, clear voice is what I need — and that’s exactly what I got from Lyra.”
  • Kimo: “Lyra’s quality is better than my integrated Mac camera.”

Wrapping Up

For Ahmed and his team at QCAP Lab, Lyra proved to be a reliable solution that significantly enhanced both video and audio quality during online meetings. From intelligent framing and natural image enhancement to clear voice pickup, Lyra helped elevate their meetings to a more professional standard — without adding complexity to their workflow.

]]>
Staying Camera-Ready at Home: Lyra Brings Visual Consistency to Gaming Streams https://www.hollyland.com/blog/application/staying-camera-ready-at-home-lyra-brings-visual-consistency-to-gaming-streams Wed, 14 Jan 2026 07:37:38 +0000 https://www.hollyland.com/?p=502596 ]]> Hollyland’s Lyra webcam brings consistent image quality and on-camera confidence to solo game streaming setups, helping creators deliver polished live content even in low-light, at-home environments.

“Hollyland makes streaming and recording my gameplay so much smoother. No lag, no drops. Everything just works.”

Meet Ladysue Alberto

Based in the Philippines, Ladysue Alberto is a content creator focused on game streaming for YouTube. Working independently from her personal gaming room, Ladysue handles every aspect of her setup herself, from camera framing and lighting to live interaction with her audience.

Like many solo streamers, her content is produced in a home gaming room, where lighting conditions and visual consistency are essential across sessions. For her channel, Ladysue chose the Hollyland Lyra webcam to improve visual consistency and reduce setup friction without interrupting her creative flow.

Project & Setup

The project consisted of regular game streaming sessions recorded and broadcast from Ladysue’s personal gaming room for her YouTube channel. As a one-person production, the setup needed to remain simple, stable, and reliable throughout extended streams.

The low-light environment posed common challenges for webcams, including reduced facial visibility, inconsistent focus, and uneven image quality. Lyra was integrated into her desk-based streaming setup to deliver consistent visuals while minimizing the need for manual adjustment.

With Lyra handling image capture, Ladysue could focus on gameplay and audience engagement rather than technical monitoring.

Solving Real Creator Workflow Challenges

One of the most immediate improvements came from Lyra’s ability to maintain clear, balanced visuals in low-light conditions. Even with minimal ambient lighting, the webcam preserved facial detail and on-screen presence, helping Ladysue stay visually connected with her audience.

Features such as AI tuning, skin beauty, and instant autofocus worked together to keep her image natural and stable as she moved during gameplay. This reduced the need for constant repositioning or manual camera control.

“Setting it up is so simple. I can focus on my content and my audience without worrying about technical issues during streams or recordings.”

Equally important was reliability. During live streams, Lyra delivered stable performance with no noticeable lag or interruptions, allowing Ladysue to stream confidently without breaking immersion.

For collaborative or multi-participant streams, Lyra also ensured clear visual communication across different setups.

“Hollyland products make collaborative streams and gameplay recording effortless. Everyone hears and sees everything clearly, no matter the setup.”

Takeaways

For Ladysue Alberto, the Hollyland Lyra webcam proved to be a dependable visual solution for solo game streaming. By combining 4K image quality, intelligent AI tuning, natural skin enhancement, and fast autofocus, Lyra elevated the overall look of her streams without adding technical complexity.

Designed for creators working from home studios and gaming rooms, Lyra supports a smoother, more confident streaming experience, allowing creators to focus less on setup and more on real-time connection with their audience.

“With Hollyland Lyra, I can record high-quality gameplay while streaming live, and the visuals stay clear and consistent the whole time.”

]]>
How to Stream Roblox on a Laptop? https://www.hollyland.com/blog/tips/stream-roblox-on-a-laptop Tue, 13 Jan 2026 07:08:04 +0000 https://www.hollyland.com/?p=501963 ]]> People love Roblox! And we’re not claiming it on our own. Stats show that Roblox had 79.5 million active users in the 2nd Quarter of 2025. Many players create unique worlds and share them with excited fans. Some creators even earn money by trading rare and valuable items. One creator named SonOfSevenless owns collectibles worth over one million. 

This article teaches you how to stream Roblox using a laptop and a streaming platform like Discord. You will learn steps that work on both Windows and Mac laptops.

Method 1: How to Stream Roblox on a Laptop when Playing via the Roblox App

  • Run Roblox on your laptop and launch the Discord software.

To make sure your camera feed is visible during the livestream, check if the camera and mic are enabled.

  • Click on the “User Settings” icon in the left panel. 
  • Scroll to the “Voice & Video” option from the left-side menu.
  • In the Input Mode, choose “Voice Activity” and ensure the toggle is on for “Automatically determine input sensitivity.” 
  • On the same page, go to the Video Settings section. Click the drop-down list under the Camera tab, and select your camera. 

After selecting your camera, click “Test Video.”

Here, you can set up features like AI tuning and 4K Cinematic image quality, which cameras such as the Hollyland VenusLiv Air offer before you go live.

  • Now, return to the main Discord screen, click the “User Settings” icon, and select the “Registered Games” option from the left-side menu. 

Important: Roblox must be running, and the app shouldn’t be closed. 

  • Click the “Add it!” option. Search for Roblox (or click the downward arrow to open the app and choose Roblox), and hit “Add Game.” 

On the main screen, Roblox will appear with the option to stream. 

  • Click the “Stream Roblox” icon to open the Screen Share window. 
  • Select the Voice Channel and Streaming Quality, such as 720p resolution and a 30 fps frame rate, and click “Go Live.” 

If you have Discord Nitro, you can stream at a higher resolution and frame rate.

  • Return to the main screen, and under the Game Server section’s VOICE CHANNELS tab, click “Gaming.”  

Important: The Voice channel names can be different, depending on your preference.

  • Once you click Gaming, the preview window for the Roblox game will appear on Discord’s interface. Double-click to maximize it and enjoy streaming. 
  • You can also end Roblox streaming on Discord by clicking the small Cancel icon under the Game Server section. 

Method 2: How to Stream Roblox on a Laptop without Downloading the Game?

Several sites let you play Roblox online from your browser without downloading Roblox Studio or the app from the official site, Windows Store, or App Store. For this method, we have used EasyFun.gg to demonstrate the process; the core steps should apply to most platforms that offer online play. 

  • Go to the easyfun.gg website in Chrome (or any browser of your choice) and type “Roblox” in the search box.
  • Click “Play Now” and allow the game to load. 

Once the loading is complete, log in or sign up by clicking the profile icon. Otherwise, you will need to wait in the queue of online players.

  • When the game starts, open Discord and go to “User Settings.” 
  • Select the “Registered Games” option.
  • Click “Add it!” and choose “Google Chrome” from the drop-down list. 

Why Choose Chrome? Because the source is now your browser, as you’re playing without downloading Roblox to your laptop’s hard drive.

  • Then click “Add Game.”

A notification will appear on Discord that “Google Chrome” is now playing. 

  • Go back to Discord’s main screen and click the “Stream Google Chrome” icon.
  • Select the Voice Channel and Stream Quality, and click “Go Live.”
  • Double-click the preview window to maximize the view of your livestream. 

Wrap Up

You wanted to stream Roblox from your laptop, and now you can. First, you can stream after downloading Roblox directly to your system. Second, you can stream while playing through a browser without problems. Discord makes both ways simple and keeps your live session smooth for friends to watch.

FAQs

1. How much money can you make as a Roblox streamer in the initial days?

It depends on how many players are attracted to your created games on Roblox. With about 1K active players, you might make around $10,000 to $50,000 (monthly estimation). The exact income depends on the way you’re promoting the game and the items available to offer. Keeping players engaged helps earnings remain steady over time.

2. How to adjust the Roblox screen on a laptop?

To change graphics in Roblox, press ESC or click the top-left logo. Open the Settings tab at the top of the menu to see options. Automatic mode lets Roblox manage graphics quality for you. Manual mode allows moving the slider right for better graphics or left for smoother performance. Press ESC or Resume to return to your game after adjusting.

3. Why is Roblox showing a white screen when I stream on Discord?

A white screen usually appears because graphics drivers are outdated or incompatible. High in-game settings or recent updates to Roblox or Discord may also cause conflicts. Restarting both apps often fixes the problem. Updating drivers and lowering graphics settings normally makes streaming work smoothly again.

]]>
How to Schedule A YouTube Streaming Video on PC and Phone? https://www.hollyland.com/blog/tips/schedule-a-youtube-streaming-video-on-pc-and-phone Tue, 13 Jan 2026 07:03:21 +0000 https://www.hollyland.com/?p=501962 ]]> Thousands of creators go live on YouTube every single day. This constant flow makes visibility difficult for new or growing channels. A scheduled stream removes confusion and supports steady audience engagement online. It also keeps the creator free from last-minute technical stress. At the same time, proper planning strengthens each session and improves overall content quality. 

This article explains how to schedule a smooth YouTube streaming video easily. It covers simple steps that you can follow using your smartphone or laptop.

Method 1: Schedule Your YouTube Streaming Video via Smartphone

  • Launch the YouTube app on your Android mobile or iPhone. 
  • Sign in to you’re account. 
  • Tap the “+” button at the bottom of the app’s screen. 

By default, the “Live” option should be highlighted when the next screen appears. If not, make sure to select it by tapping once. 

  • Before scheduling the time and date, tap on the mobile icon in the right area of the screen and choose your preferred streaming orientation (vertical or horizontal). 
  • Tap on the “Calendar” icon in the top-right corner.

As you haven’t scheduled any upcoming livestreams, the screen should appear blank.

  • After this confirmation, go back to the “Live” option and tap on the small “pencil-like” icon at the bottom-right area of the screen to edit details of your upcoming stream. 
  • Write a suitable title and tap on the “Visibility” option.
  • Choose your preferred visibility setting from the available options – Public, Members only, and Unlisted. 
  • After it’s done, return to the main edit screen, scroll down to the “Schedule for later” option, and tap it to enable the toggle. 
  • Set the “Date” on which you want to schedule your streaming video and tap “OK.” 
  • Then, select the “Time” and tap the “OK” option again. 
  • Tap the “Next” button. 
  • Select your “Audience” from the given options, and tap the “Next” button again. 
  • A confirmation screen will appear with the scheduled date and time. You can edit it by tapping the “Pencil” icon, or simply tap “Done” if you don’t want to make any changes. 
  • To re-check, tap on the “Calendar” icon, and the scheduled livestream should show up. 

You can leave it be or tap the “Bin” icon if you want to delete it. 

Method 2: Schedule Your YouTube Streaming Video using a Computer

  • Go to YouTube.com and sign in to your account.
  • Click on your profile image and choose YouTube Studio.

You will be taken to your “Channel Dashboard.”

  • Click on the “Go Live” icon in the top-right corner of the screen.
  • On the next page, click on the “Schedule Stream” button next to the Manage streams tab.
  • Choose the “Create new” option.

Important: You can use the “Reuse settings” option after scheduling and going live on your first stream. This option saves you time and effort, as it automatically applies the same settings of your previous livestreams to the newly scheduled ones.

  • Now, enter the title and description, and scroll down to configure other details, such as thumbnail and category. Then, click “Next.” 
  • Fill in the monetization details (if required) and click “Next.” 
  • In the “Customization” section, you can enable or disable options, like Live Chat, Participant modes, and Reactions. Once done, click the “Next” button to set Visibility. 
  • Choose who can see your stream by selecting options, like Private, Unlisted, Members only, or Public.  
  • Then, set a future date and time you want to schedule the video streaming on YouTube, and click “Done.”

At this stage, you have successfully scheduled your upcoming stream. To reconfirm:

  • Click the “YouTube Studio” logo at the top of the screen.
  • Select “Content” from the options on the left side of the screen. 
  • Click on the “Live” tab.

All your scheduled streams will appear in the “Upcoming” section.

  • When you’re about to go live on YouTube on your scheduled date and time, make sure you have your streaming equipment tested and ready. A streaming camera like the Hollyland VenusLiv Air can help you overcome technical obstacles. It’s a plug-and-play device with an AI-Tuning feature and 24/7 live streaming capability without getting heated.

Wrap Up

Scheduling a YouTube stream can start on iPhones, Androids, or tablets. Desktop or laptop scheduling is simple too, with no extra steps needed. However, Mac or Windows systems give a larger, more detailed view of the YouTube Studio dashboard, which makes adjusting settings easier and safer. But phones still work well for fast scheduling on the go.

FAQs

1. Can you pre-record lives on YouTube?

Yes, YouTube allows streaming pre-recorded videos as live events. This helps creators share polished content while keeping viewers engaged. Tools like Gyre or Castr make it easy to broadcast pre-recorded videos smoothly without appearing offline.

2. How to run a 24/7 live stream on YouTube?

Streaming software like OBS or Streamlabs can loop videos automatically. Services such as Restream.io’s “Always On” feature keep streams running nonstop. This allows channels to stay live around the clock without needing constant attention or manual control.

3. What are the requirements for scheduling a YouTube streaming video?

Your channel needs at least 50 subscribers and no recent live restrictions. Verification and live streaming must be enabled. First-time streams may require waiting 24 hours before starting, ensuring everything is ready for smooth streaming.

]]>
How to Stream The Challenge 40 Online? https://www.hollyland.com/blog/tips/stream-the-challenge-40-online Tue, 13 Jan 2026 06:59:18 +0000 https://www.hollyland.com/?p=501961 ]]> The Challenge pulls viewers in with strong moments and bold energy. Season 40 became popular because every episode brings sharp tension and surprises. The teams fight hard, and every choice changes the game quickly. Twists like the Karma Vote keep fans watching with real interest. Many people now search for safe platforms to stream this season. This guide lists trusted services so you can watch without worry. 

You can also create reaction videos after watching the show during your livestream. The Hollyland VenusLiv Air helps you stream reactions with smooth control. It offers an AI Tuning feature for better picture balance. It supports 24/7 continuous streaming in clear 4K quality.

Where to Watch The Challenge 40: Battle of the Eras

1. Paramount Plus (Paramount+)

Paramount Plus works as the main streaming home for MTV shows. Many seasons of The Challenge are easy to watch on demand. New episodes also appear shortly after airing on MTV. This helps fans keep up with the season without stress. The Premium plan offers more comfort for viewers who want extra content. 

It includes SHOWTIME titles, popular originals, movies, and live sports. Most videos play without ads, except during live TV. The plan costs $12.99 each month, which feels fair for steady access. Many fans choose it because the service keeps streaming simple and smooth.

2. Fubo

Fubo is a popular choice for people who enjoy sports content. The service carries many channels that cover regional and international leagues. This gives viewers steady access to different games throughout the year. The platform also offers extra add-ons for those wanting more options. These add-ons increase the monthly cost but expand available channels. 

The starter price is $9.99 for the first month. The fee then changes to $14.99 each month after that. Some plans follow a three-month billing cycle at $34.99. This setup helps fans pick the plan that matches their needs.

3. Hulu + Live TV

Hulu + Live TV makes watching The Challenge Season 40 simple. The service carries MTV, so fans can stream new episodes trouble-free. It also offers a 3-day free trial for new users. After the trial, the monthly price starts at $90.

The plan includes over 95 popular channels from cable and broadcast networks. This gives viewers steady access to sports, news, and entertainment. Hulu + Live TV also provides DVR support for saving episodes. Users get Disney+ and ESPN access at no extra cost.

4. Philo

Philo is a cheap and simple option for streaming shows. It costs $33 each month, which feels very easy to afford. The service includes 70+ channels, and MTV is one of them. This means fans can watch The Challenge 40 without paying extra. 

Philo does not offer local channels or popular sports networks. Even with these limits, many people still choose it for its low price. The service works well for people who watch mainly entertainment. Philo is perfect for budget-conscious fans who want a reliable streaming option.

5. DirecTV Stream

DirecTV Stream is a strong choice for streaming The Challenge 40. MTV is included in many of its available plans. The MyEntertainment plan costs $35 per month and includes Disney+ and HBO Max. It also offers popular channels like Bravo, Discovery, FX, and TLC. MTV appears in all of the main Signature Packages for easy access. 

The Signature Packages start at $49.99 per month with fees for the first month. Higher plans offer more channels and extra features for fans who want them.

6. Prime Video

Prime Video comes with an Amazon Prime membership for easy streaming. The service offers many movies and shows that people often watch. Many fans also use Prime Video to watch The Challenge 40 online.

It works on phones, tablets, smart TVs, and laptops. Members can open the app and watch without extra charges. The plan costs $5.99 each month and can be canceled anytime.

Wrap Up

The Challenge Season 40 brings steady excitement and fresh surprises every episode. Fans can watch older seasons on several trusted streaming platforms. New episodes appear after airing on MTV through supported services. Keep checking your preferred platform for updates and enjoy each moment.

FAQs

1. Can The Challenge be streamed for free online?

Some services offer limited free trials for new viewers. Fans can check DirecTV Stream, Philo, or Fubo for short trial periods. These trials allow people to watch recent episodes as they appear online. After the trial, a paid plan is required for continued access.

2. Does Paramount Plus include MTV content?

Paramount Plus offers a wide mix of shows from different networks. The platform carries full episodes from MTV, BET, CBS, and other major channels. This gives subscribers many viewing choices on one service. Fans can stream new and older MTV shows with ease.

3. How many episodes are in Season 40?

The Challenge Season 40 includes a total of twenty-one episodes. This count also includes a reunion presented in two parts. Fans can follow every moment from the opening challenge to the final results. The full season offers plenty of content for returning viewers.

4. Is The Challenge completely unscripted?

Many fans believe the show follows real reactions and real gameplay. Some editing may shape scenes to create smoother storylines. Viewers often debate whether producers guide certain decisions. A few cast members seem protected at times, which fuels ongoing questions about fairness.

]]>
How to Improve Streaming Quality (Core Settings Guide) https://www.hollyland.com/blog/improve-streaming-quality Tue, 13 Jan 2026 06:57:00 +0000 https://www.hollyland.com/?p=501960 ]]> If your Twitch or YouTube stream keeps looking blurry, laggy, or pixelated, especially during fast gameplay, you’re not alone. Most creators struggle with dropped frames, bad bitrate, grainy webcams, and inconsistent quality across scenes.

The good news? You can upgrade your stream with a few targeted adjustments and settings. This beginner-friendly guide walks you through every essential method step-by-step, with references to help you follow along easily.

What Factors Affect Your Streaming Quality?

Before adjusting settings, it helps to understand what factors impact your stream quality the most:

  • Your internet upload speed
  • Bitrate settings in OBS/Streamlabs
  • Encoder choice (software vs hardware)
  • Resolution & framerate
  • Network stability
  • Camera quality
  • CPU/GPU workload
  • Scene & source optimization

Fixing even 2-3 of these can drastically boost your output quality. Now that you understand the key components behind clear streaming output, let’s begin optimizing each one inside OBS Studio.

How to Improve Streaming Quality in OBS Studio

Let’s start with the most essential core settings inside OBS and work our way outward to your hardware, camera, and network. OBS Studio controls most of your core quality settings, so optimizing these parameters is the fastest way to reduce blur and pixelation.

Step #1 – Optimize OBS Output Settings for Best Quality

These settings directly influence encoding performance, clarity, and bitrate stability, so follow them carefully.

  • Go to: OBS → Settings in the bottom right corner.
  • Click on Stream → uncheck ‘Enable Enhanced Broadcasting’ → check ‘Ignore streaming service setting management’.
  • This allows you to adjust the settings manually.
  • Then click on the Apply button.
  • Next, go to Output → Output Mode → Advanced.
  • Now, in the streaming settings, choose:
    • Audio Track → 1, Audio Encoder → FFmpeg AAC, Twitch VOD Track → 6 (useful for adding copyright-free music to streams).
  • If You Have an NVIDIA GPU → Select NVENC (New)
    • Uses GPU for encoding
    • Better quality at the same bitrate
    • Lower CPU usage
  • If You Have an AMD GPU → Select H.264/AVC or AMF
  • If You Have No GPU → Use x264
  • Now, in the Encoder Settings, set the Rate Control → CBR.
    • For streaming → choose CBR (Constant Bitrate)
    • For recording → choose CQP for sharper local files
  • Next, set a suitable bitrate (recommended → 8000 Kbps).

You can use this bitrate guide:

Resolution | Framerate | Recommended Bitrate
720p | 30 fps | 2500-3500 kbps
720p | 60 fps | 3500-4500 kbps
1080p | 30 fps | 4500-6000 kbps
1080p | 60 fps | 6000 kbps or above

Check your upload speed first on speedtest.net, keeping the bitrate at or below 70% of your upload.
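If you would rather not do the math by hand, here is a tiny Python sketch of that 70% rule; the function name and safety factor are just illustrative.

```python
# Minimal sketch of the 70% rule above: cap your streaming bitrate at
# roughly 70% of the upload speed that speedtest.net reports.
def max_safe_bitrate_kbps(upload_mbps, safety_factor=0.7):
    """Convert an upload speed in Mbps to a conservative bitrate in kbps."""
    return int(upload_mbps * 1000 * safety_factor)

# Example: a 10 Mbps upload supports roughly 7000 kbps,
# comfortably above the 6000 kbps suggested for 1080p60.
print(max_safe_bitrate_kbps(10))  # 7000
```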

  • Next, set the Keyframe Interval to 2. This is required by Twitch and YouTube for stable playback.
  • For the Preset, choose anything between P5 and P7, depending on your computer’s performance. 
  • Next, set Tuning → High Quality, Multipass Mode → Two Passes, Profile → High, GPU → 0, Max B-frames → 2.
  • Go to the Audio menu and set the Audio Bitrate → 320 for all the tracks.
  • Finally, hit the Apply button.

Now that your core encoding and bitrate settings are optimized, let’s fix your resolution and framerate.

Step #2 – Improve Base & Output Resolution (Scaling for Quality)

OBS scaling settings significantly impact sharpness.

  • Go to Settings → Video. Set the Base (Canvas) Resolution to your monitor resolution and the Output (Scaled) Resolution to 1920×1080.
  • If your stream lags, scale down to:
    • 1600×900
    • 1280×720
    • 720p60 often looks better than 1080p30 if your bitrate is limited.
  • Choose the Best Downscale Filter. Select: ✔ Lanczos (36 samples). This gives the sharpest results. 
  • If your resolutions match, then no need for downscaling.
  • Next, hit the Apply button.
  • Now, go to the Advanced tab and make sure Automatically Reconnect is enabled. This is helpful if your stream gets disconnected.
  • Set Retry Delay → 2s and Maximum Retries → 25. Then hit the Apply button.

Once resolution and scaling are configured, the next biggest improvement comes from adjusting your camera setup.

STEP #3 – Improve Webcam Quality in OBS 

Even a good webcam looks bad without proper configuration. We will now manually configure camera settings.

  • In the Sources panel, right-click your camera → Properties
  • Deactivate and reactivate the source to refresh your camera before optimizing settings.
  • Set the Resolution type to Custom.
  • Now, Increase Resolution & Framerate. Higher framerate = smoother talking & movement. Set camera output to:
  • 1080p 30fps or
  • 720p 60fps

If you want smoother, higher-quality streaming and stable backups across multiple scenes, the Hollyland VenusLiv Air integrates naturally into your workflow. It supports clean audio capture, high-resolution output, and 24/7 stable streaming.

  • Next, set buffering to Auto detect and click OK.
  • Now go back to Sources, right-click your camera → Properties → Configure Video
  • Change the given settings and click Apply.
    • Exposure: Manual
    • Gain: Low (reduces noise)
    • White balance: Manual
    • Sharpness, Brightness: Adjust moderately

How to Improve Streaming Quality in Streamlabs

Streamlabs follows similar principles to OBS, but the interface differs slightly.

  • Go to Settings → Output → Output Mode → Advanced.

Now, apply the same settings we configured for OBS to Streamlabs.

Additionally, you can follow these steps to improve your setup for streaming outside the applications.

  1. Use good lighting. It improves the quality more than the camera itself. You can use a softbox, a ring light, or LED panels.
  2. Improve Internet Stability for Higher Stream Quality. Prefer Ethernet over Wi-Fi; it eliminates drops and reduces latency significantly.
  3. Enable QoS on Your Router. This prioritizes streaming/video traffic so your bitrate stays stable.
  4. Avoid Streaming During Peak Hours. Upload speeds often fluctuate; test your speed before going live.
  5. Close Background Apps. These steal bandwidth and cause bitrate drops.

Conclusion

Improving your streaming quality doesn’t require expensive upgrades. Just the right settings. By optimizing your encoder, bitrate, resolution, camera, and network stability, you can eliminate lag and produce sharper, more professional streams. 

FAQs

Q1. Why is my stream blurry even though my internet is good?

This usually happens because your bitrate is too low for your chosen resolution. Lower your stream resolution to 720p60 or increase your bitrate if your upload speed allows it. Also, check OBS downscale filters and enable Lanczos for maximum sharpness.

Q2. What encoder should I use for the best stream quality?

Use NVENC (New) if you have an NVIDIA card. It delivers better results than x264 with less CPU usage. AMD users can choose AMF, while CPU-only users should stick to x264 at “veryfast” or “superfast” presets.

Q3. Why does my webcam look grainy?

Grain happens when the gain or ISO is too high. Lower gain, improve lighting, and set your webcam to manual exposure/white balance. Also, ensure your webcam is outputting at 1080p or 720p at the correct framerate.

Q4. How do I stop dropped frames while streaming?

Dropped frames are caused by network instability. Use Ethernet, reduce bitrate, remove bandwidth-heavy apps, and enable QoS on your router. OBS logs can also show if drops are network or rendering-related.

Q5. Do I need hardware upgrades to improve quality?

Not always. Optimizing OBS and internet settings often makes the biggest difference. But upgrading your GPU, capture card, or using stable streaming hardware can boost both quality and reliability.

]]>
How to Add Music to Your Twitch Stream (Step-by-Step) https://www.hollyland.com/blog/tips/add-music-to-your-twitch-stream-2 Tue, 13 Jan 2026 06:43:53 +0000 https://www.hollyland.com/?p=501959 ]]> Maybe you’ve gone live on Twitch and felt like something was missing. Your gameplay is solid, your mic sounds good, but the vibe is still a little empty. Music is usually the missing piece. But for many streamers, adding music becomes frustrating fast. Copyright warnings, muted VODs, messy audio settings, Spotify not being picked up, or the fear of “doing it wrong” stop creators from using music altogether.

This guide breaks everything down into clear, beginner-friendly steps so you can safely add music from Spotify, YouTube, local files, and other sources. 

What You Need Before Adding Music to Your Twitch Stream

Before we jump into the methods, make sure you understand how Twitch handles music and how your streaming software processes audio.

Check Music Permissions

Twitch does not allow streaming copyrighted songs that you don’t own the rights to. Even if the music plays fine during the live broadcast, your VOD may get muted afterwards. Before choosing any music source, ensure you’re using:

  • Royalty-free music
  • Music you own or licensed
  • Soundtrack by Twitch
  • Independent artist-licensed tracks
  • Streamer-friendly playlist platforms

How to Add Music to a Twitch Stream Using Local Audio Files

To add local music files to a Twitch Studio stream, you’ll need to use a different application since Twitch Studio does not have a dedicated media source for local files. Local audio files (MP3, WAV, OGG) are ideal because they give you full control and no copyright problems if you legally own or have licensed them.

Option 1 – Adding Local Music Files in OBS Studio

  • In the Sources panel, click the + icon, then click on VLC Video Source.
  • Name the source (e.g., Music), and click OK.
  • Set the desired options for your playlist.
  • Click on the + icon.
  • Select the Add Files option.
  • Choose your MP3/WAV file playlist and click Open.
  • Check Local File and click OK.
  • The Music will be added to the Audio Mixer. You can use it to lower music volume and ensure it doesn’t overpower your microphone.

Option 2 – Adding Local Music Files in Streamlabs Desktop

If you are streaming with Streamlabs, here’s the quick method to add local music to your Twitch streams.

  • Open Streamlabs Desktop and go to Sources.
  • Click on Media file → Add Source.
  • Scroll and select your music. If you want to add a new source instead, toggle that option and click Add Source.
  • You can see the Music added in the Mixer bar. Adjust the volume levels and hit the Go Live button.

How to Add Spotify or YouTube Music to Your Twitch Stream

Spotify and YouTube are common music sources, but remember, they are not licensed for VODs. These are popular but must be added carefully due to copyright rules. Below, we have discussed the steps to add Spotify music to Twitch streams. Similar methods can be followed to add YouTube music too.

I. Using Spotify to add Music to Twitch through OBS Studio

You can use Spotify in two ways:

  • Through a web browser (Chrome, Firefox, Edge, etc.)
  • Through the Spotify desktop application

Either option works for capturing audio inside OBS.

Option A – Spotify Through the Web Browser

  • Open Spotify in your browser (say, Chrome). Once your music source is ready, move back to your OBS desktop window.

Method #1 – Capture Spotify Using Desktop Audio

  • Check if Desktop Audio moves in the OBS Audio Mixer.
  • If you do not see Desktop Audio, click on Settings in the right corner.
  • Next, click Audio → choose the device that plays your system audio (e.g., Speakers, Headphones, Realtek, etc.) → click OK.
  • When you play a song, OBS will start detecting it through Desktop Audio. You’ll see the audio levels move inside the Audio Mixer.

This confirms the sound is being captured for your stream. This is the simplest method because OBS automatically captures whatever plays on your computer.

Method #2 – Isolate Spotify Using Application Audio Capture

If you want a more direct and controlled way to add music to your Twitch streams, use this method, which does not rely on Desktop Audio.

This lets you isolate only Spotify audio and prevent other desktop sounds from being captured.

  • At the bottom of OBS, click the + button in the Sources panel.
  • Choose the Application Audio Capture (BETA) option.
  • Rename the source (example: Spotify Google Chrome, or any browser you are using) and click OK.
  • In the configuration box, go to the Window dropdown, choose the browser tab where Spotify is open (for example, [Chrome.exe]), and click OK.
  • A new audio bar labelled Spotify Google Chrome appears.

This means only Spotify is being captured, and no other browser sound or desktop notifications. This completes the Browser Method using Audio Capture.

Option B – Spotify Using the Desktop App

This method lets you capture only the Spotify app’s audio, without capturing your entire system sound. It’s perfect if you want clean, isolated music control.

  • Install the Spotify Desktop app.
  • Go to the Sources panel at the bottom of OBS. Click the + (Add) button.
  • Choose Application Audio Capture (BETA) from the list.
  • A naming window will appear. Name it “Spotify App” and click OK. This helps differentiate between Spotify via browser and Spotify via app.
  • In the Window dropdown, look for the Spotify application.
  • It may appear with different system-level identifiers, such as spotify.exe. Next, click OK.
  • OBS will now isolate and listen only to the Spotify app’s audio. A new audio bar labelled Spotify App will appear in the Audio Mixer.

II. Using Spotify to add Music to Twitch through Streamlabs

If you stream through Streamlabs, follow these steps:

  • Click on the + icon in the Sources menu.
  • Choose Application Audio Capture (BETA) → Add Source.
  • Name the source (say, ‘Music 2’) → Add Source.
  • If you’re adding multiple audio or video sources or managing different music tracks across scenes, having a stable, high-quality backup workflow becomes essential. 
  • Devices like the Hollyland VenusLiv Air help with this by supporting 24/7 streaming, high-resolution capture, and clean audio input. They are also ideal for keeping a master recording even when your Twitch VOD excludes certain music tracks.
  • Now choose the music source you want to add.
  • For Spotify, ensure you have Spotify music running in the background through the browser or app.
  • Choose [Spotify.exe] under the Window dropdown.
  • “Music 2” now appears in the Mixer. This ensures Spotify music is successfully added to be streamed on Twitch through Streamlabs.

Conclusion

Add music to your Twitch stream confidently by choosing rights-cleared sources, isolating music with application capture, and utilising multi-track routing to ensure VODs remain clean. Use Soundtrack by Twitch or licensed libraries for the safest results. For pro workflows and secure master copies, pair software routing with hardware like the Hollyland VenusLiv Air 2 to archive edit-ready recordings.

FAQs

Q1. Can I use Soundtrack by Twitch to avoid VOD muting?

Not any longer. Twitch officially shut down the Soundtrack by Twitch service on July 17, 2023. This means you can no longer add new tracks to it. Instead, you’ll need to use other rights-cleared music libraries or ensure routing and licensing compliance manually.

Q2. How do I keep music out of my Twitch VODs while still letting it play live?

Use multi-track routing in your encoder. In OBS (Advanced Output), enable multiple audio tracks, assign mic/game to the stream/VOD track, and assign music to a stream-only track. For Streamlabs, do partial routing. Twitch Studio lacks VOD routing. Test local recordings to confirm the final VOD file has no music.

Q3. How do I stop my VOD from getting muted if I use Spotify for live music?

Mute Spotify from the VOD track by assigning it only to the live audio track using OBS’s Advanced Audio Properties. This allows followers to hear music during the live stream but keeps it out of recordings and VODs. Always test your audio tracks before starting a session.

Q4. How can I allow song requests safely without risking copyright strikes?

Enable requests only from vetted libraries (Soundtrack, Pretzel, Lickd, royalty-free lists). Configure your song-request bot to accept approved sources only, block open Spotify/YouTube requests, and log every played track. Review user submissions before adding them live. This reduces unexpected copyrighted songs being played on stream or saved to VODs.

Q5. Do I need extra hardware for backups and better audio control?

Not necessarily. Most streamers manage music and routing using software tools like OBS or Streamlabs. However, audio interfaces and mixers such as GoXLR, Elgato Wave Link, or Beacn Mix can make balancing channels easier. Hardware isn’t mandatory, but it can improve workflow, consistency, and overall sound quality.

]]>
What Are Audio Meters? The Complete Guide to Monitoring Audio Levels in Video Production https://www.hollyland.com/blog/tips/what-are-audio-meters Thu, 08 Jan 2026 08:02:53 +0000 https://www.hollyland.com/?p=496938 ]]> You’ve likely heard the saying: audiences will forgive grainy video, but they will click away immediately if the audio is bad. But how do you know if your audio is “good” without waiting until you are back in the editing suite? The answer lies in understanding what audio meters are and how to monitor audio levels in video production.

A time comes in every creator’s journey when they start trusting manual control of audio settings rather than simply leaving everything on “Auto.” Of course, you can’t do that without knowing what decibels, peaks, and waveforms are. 

Here’s a detailed guide to help you read those green and red bouncing bars so everything starts to make sense. Going through each section will bring you closer to recording a professional and clean sound for your documentaries, filmmaking, and all kinds of video projects.

What is an Audio Meter and Why Does It Matter?

Definition: An audio meter is a graphical display used in video production to visualize the loudness of an audio signal, typically measured in Decibels (dB). It helps creators prevent “clipping” (distortion) and maintain consistent volume levels throughout a recording.

During video production, your ears adjust to the volume without you noticing because that’s how human hearing works. So no wonder you can’t always trust them. Digital meters, on the other hand, give you the real, exact levels.

The Balancing Act: Signal vs. Noise

The main reason to use an audio meter is to keep your sound just right, not too soft and not too loud.

  1. The Noise Floor (Too Quiet): Every piece of electronic gear produces a low-level hiss known as the “noise floor.” If your recording levels are too low, your voice gets buried in this hiss. Later, when you try to increase the volume while editing, the hiss eventually gets louder. As a result, it makes your audio sound noisy and unprofessional.
  2. Clipping (Too Loud): This is the worst-case scenario. If the audio signal exceeds the maximum limit (0dB in digital systems), the top of the sound wave is chopped off. This is called clipping. Unlike a slightly dark video image that can be brightened, clipped audio is permanently distorted and usually cannot be fixed.

By monitoring your meters, you ensure your audio sits comfortably above the noise floor but safely below the clipping point.

Understanding the Scale: dBFS Explained

When you look at the audio meter on your camera, receiver, or editing software, you are looking at a dBFS (Decibels relative to Full Scale) meter.

Remember, a digital audio meter is quite different from a speedometer. It counts backward with negative numbers, since 0dBFS is the absolute ceiling, whereas a speedometer starts at 0 and climbs upward.

Think of 0dB as the top rim of a glass:

  • Negative numbers (e.g., -20dB): This represents how much “headroom” or empty space is left in the glass before it spills over.
  • 0dB: The glass is completely full.
  • Above 0dB: The water spills. In digital audio, this results in clipping.

Back in the analog days, you could push audio past 0 for a warm sound. In digital, 0dB is a hard limit. If your audio signal hits it, the waveform tops get cut off, causing sharp, unpleasant distortion.
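If you like seeing the math, the dBFS scale is just a logarithm of the signal’s amplitude relative to full scale. Here is a minimal Python sketch; the helper names are illustrative.

```python
# Minimal sketch of the dBFS scale: 0 dBFS is full scale (amplitude 1.0),
# and everything below it is negative "headroom".
import math

def amplitude_to_dbfs(amplitude):
    """Linear amplitude (0.0-1.0) to dBFS; silence approaches -infinity."""
    return 20 * math.log10(amplitude) if amplitude > 0 else float("-inf")

def dbfs_to_amplitude(dbfs):
    """dBFS back to linear amplitude relative to full scale."""
    return 10 ** (dbfs / 20)

print(amplitude_to_dbfs(1.0))              # 0.0 dBFS -> the glass is full
print(round(amplitude_to_dbfs(0.25), 1))   # -12.0 dBFS -> the dialogue sweet spot
print(round(dbfs_to_amplitude(-6.0), 3))   # ~0.501 -> half the amplitude left as headroom
```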

The “Traffic Light” System

To make monitoring easier, most meters use a color-coded system similar to a traffic light. Understanding these zones is the fastest way to diagnose your audio health at a glance.

Zone | Level Range (dB) | What It Means | Action Required
Red Zone | -3dB to 0dB | DANGER. You’re about to clip. Don’t let the meter go all the way up; if it does, your audio will get ruined. | Turn Down Gain. Lower your input levels immediately to prevent distortion.
Yellow Zone | -12dB to -6dB | Target Range. This is the “sweet spot” for spoken word and dialogue on YouTube and the web. | Maintain. Keep your loudest peaks bouncing in this area.
Green Zone | -20dB to -12dB | Safe Range. Good for dynamic range and background sounds, but potentially too quiet for the main dialogue. | Monitor. If dialogue stays here, you may need to boost it in post-production.
The Floor | -60dB to -∞ | Noise Floor. This is silence or room tone (hiss). | Check Noise. If this meter is moving while no one is speaking, your environment is too noisy.

Pro Tip: Do not aim for 0dB. Many beginners try to get the meter as high as possible, but this leaves no room for error. If your subject laughs or shouts unexpectedly, you will clip. Always leave “headroom” by aiming for the Yellow Zone (-12dB to -6dB).
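To tie the zones together, here is a minimal Python sketch that maps a peak reading onto the traffic-light zones above; the boundaries are simplified slightly so adjacent zones meet, and the function name is illustrative.

```python
# Minimal sketch: classify a peak reading (in dBFS) into the traffic-light
# zones from the table above. Boundaries are simplified so the zones meet.
def meter_zone(peak_db):
    if peak_db >= -3:
        return "RED: about to clip - back the gain off now"
    if peak_db >= -12:
        return "YELLOW: target range for dialogue"
    if peak_db >= -20:
        return "GREEN: safe, but possibly quiet for main dialogue"
    return "FLOOR: near the noise floor / room tone"

for level in (-1, -8, -16, -45):
    print(f"{level} dB -> {meter_zone(level)}")
```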

How to Monitor Audio Levels During Recording (Production)

Now that you understand the zones, how do you practically set them before hitting record? Monitoring audio isn’t just about watching bars bounce; it is about Gain Staging. This makes sure your sound stays clean and strong all the way from the mic to your camera.

Here is the step-by-step workflow to ensure pristine audio levels on set.

Step 1: Set Your Pre-amp and Camera Gain

The most common cause of “hiss” in video audio is the camera’s internal pre-amp. Most DSLR and mirrorless cameras have low-quality pre-amps that introduce noise if you turn them up too high.

  • The Rule: Keep your camera’s audio input level low (usually around setting 1 or -20dB on the camera menu).
  • The Adjustment: Use your external mic or receiver for the volume boost. This keeps the noisy camera out of the way and gives you a cleaner “signal-to-noise ratio.”

Step 2: Check the Source Meter

Once your camera is set, you need to adjust the output volume on your microphone receiver. Your goal is to get the meters bouncing in the Yellow Zone (-12dB to -6dB) when the subject is speaking at a normal volume.

However, monitoring these levels on a tiny camera screen can be difficult, especially if you are filming yourself or moving around.

The Hardware Solution:

Using a system with built-in visual metering is a game-changer for solo creators. Professional wireless systems, like the Hollyland LARK MAX 2, come with a clear touchscreen display on the receiver unit.

It lets you adjust gain, activate timecode, or switch channel modes without breaking a sweat. What makes it top-notch is wireless audio monitoring via OWS earphones: you can confirm your subject is safely in the “Yellow Zone” with a single glance at the receiver while listening to the live audio feed.

Step 3: The “Safety Track” Method

Even with perfect gain staging, unexpected loud noises (like a laugh or a shout) can cause your audio to spike into the Red Zone and clip. This is where a Safety Track acts as your insurance policy.

A Safety Track records the audio twice simultaneously:

  1. Track A: Recorded at your standard level.
  2. Track B: Recorded at a lower volume (typically -6dB lower).

If your main audio clips during a loud moment, you can simply swap them out for the clean, unclipped audio from the Safety Track during editing. It effectively eliminates the fear of ruining a take due to bad levels.
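In editing terms, the swap is simple. Here is a minimal NumPy sketch, assuming both tracks are float arrays normalized so 1.0 equals 0 dBFS; the clipping check and gain math are illustrative, not a particular editor’s feature.

```python
# Minimal sketch of the safety-track swap: if the main track hit full scale
# (clipped), fall back to the quieter safety track and boost it back up.
import numpy as np

def pick_clean_take(main, safety, offset_db=6.0):
    if np.max(np.abs(main)) < 1.0:       # main track never reached full scale
        return main
    gain = 10 ** (offset_db / 20)        # +6 dB is roughly a 2x linear boost
    return safety * gain                 # restore the -6 dB safety track
```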

How to Monitor Audio Levels During Editing (Post-Production)

Got clean audio? Awesome! But hold off on the victory dance. In editors like DaVinci Resolve, Final Cut, or Premiere Pro, your focus shifts from avoiding distortion to keeping your sound consistent.

While your camera meters helped you avoid clipping during the shoot, your editing meters help you balance dialogue against music and ensure your video meets the loudness standards of streaming platforms.

True Peak vs. LUFS: Understanding the Difference

You will notice two different ways to measure sound during post-production.

  • True Peak Metering (Instant Volume): This is what you monitored while recording. It tracks the loudest, flashiest moments in your audio to identify potential distortion spikes. However, it does not tell you how loud the video actually sounds to a human ear.
  • LUFS (Perceived Loudness): Standing for Loudness Units Full Scale, this measures the average volume over time. It shows how loud your video actually sounds to the audience.

The Golden Rule: Use Peak Meters to catch technical errors (clipping), but use LUFS to set your final volume levels.

Target Levels for Delivery

Different platforms have different “Loudness Standards.” If you upload a video that is too loud, the platform will forcibly turn it down (normalization). If it is too quiet, your viewers will struggle to hear it on mobile devices.

Here are the industry targets you should aim for in your final export:

1. The Internet Standard (YouTube, Spotify, Podcast)

  • Target: -14 LUFS
  • True Peak Max: -1.0 dBTP
    Most online streaming platforms optimize for -14 LUFS. If you mix your dialogue to hit this average, your video will play at the same volume as major YouTubers and music videos. Always leave 1dB of headroom (-1.0 dBTP) to ensure conversion to MP3 or AAC doesn’t cause distortion.

2. The Broadcast Standard (TV & Netflix)

  • Target: -23 LUFS (EU) or -24 LUFS (US)
  • True Peak Max: -2.0 dBTP
    Television standards are much stricter and generally quieter than the internet to allow for massive dynamic range (explosions vs. whispers).

Pro Tip: If you are creating content for YouTube, do not mix to broadcast standards (-24 LUFS). Otherwise, when viewers watch your video, it will sound quieter than other content on their feed. Stick to -14 LUFS for the web.
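If you want to verify an export against these targets, the loudness meter in your NLE works, but you can also script the check. Here is a minimal Python sketch assuming the third-party pyloudnorm and soundfile packages are installed; the file name is a placeholder, and the peak shown is a simple sample peak rather than an oversampled true peak.

```python
# Minimal sketch: measure a final mix against the -14 LUFS web target.
# Assumes pyloudnorm and soundfile are installed; "final_mix.wav" is a placeholder.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("final_mix.wav")                   # float samples, 1.0 = full scale
loudness = pyln.Meter(rate).integrated_loudness(data)   # integrated LUFS
sample_peak_db = 20 * np.log10(np.max(np.abs(data)))    # sample peak in dBFS (approximate)

print(f"Integrated loudness: {loudness:.1f} LUFS (web target: -14)")
print(f"Sample peak: {sample_peak_db:.1f} dBFS (keep true peak under -1.0)")
```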

Visual Meters vs. Ears: Why You Need Both

Looking at your camera’s LCD or timeline isn’t enough to judge audio. Great shots can still have bad sound if you’re not careful. While audio meters provide objective data, they lack context.

Think of it this way: Meters measure quantity (Volume); your ears measure quality (Fidelity).

The Deception of the Meter

A visual meter is technically “blind.” It simply measures the electrical voltage of the signal coming in. It doesn’t know the difference between a voice and a noisy air conditioner, so it treats both sources the same way.

You might look at your camera and see the levels bouncing perfectly in the -12dB to -6dB range. Visually, the audio looks safe. However, without headphones, you are missing important issues that the meter interprets as “healthy volume,” such as:

  • RF Interference: That subtle buzzing from a nearby phone often registers at a low volume, but it ruins the recording.
  • Wind Noise: Low-frequency wind rumble can push meters into the “safe zone” while drowning out the dialogue.
  • Lavalier Rustle: If a shirt rubs against the mic, the spike in volume looks like speech on a meter, but sounds like scratching on playback.
  • Reverb/Echo: A meter cannot tell you if the microphone is too far away, resulting in a “hollow” sound.

The “Trust but Verify” Workflow

To capture professional audio, you must synchronize your eyes and your ears. Use the following workflow to ensure your audio is both loud enough and clear enough:

  1. Eyes on the Meter: Use the visual display to set your Gain Staging. Also, watch your peaks so they don’t hit the red zone (0dB) or drop too low into the noise floor.
  2. Ears on the Monitor: Use closed-back headphones to isolate the sound. Listen specifically for background noise, distortion, and clarity.

3 Common Audio Metering Mistakes to Avoid

Avoid these pitfalls to ensure clean, usable sound every time.

1. “Riding the Red” (Zero Headroom)

It’s common for beginners to think louder is better and push levels toward 0dB. Here’s what really happens. 0 dB is the limit in digital video, and once you hit it, the audio clips and is permanently distorted.

  • The Fix: Always leave headroom. Aim for your peaks to hit between -12dB and -6dB. This safety buffer ensures that if your subject suddenly laughs or shouts, the signal has space to expand without hitting the digital ceiling.

2. Recording Too Low (The Noise Floor Trap)

On the flip side, some creators are so terrified of clipping that they record everything at extremely low levels, hovering around -40dB or -30dB. While this prevents distortion, it introduces a different problem: Hiss.

If your recorded signal is too close to the noise floor (a poor Signal-to-Noise Ratio), you will have to boost the volume significantly in editing. Every time you raise the voice level, the static increases with it, leading to an unpolished, amateur, and noisy sound.

  • The Fix: Be brave with your gain. Get your levels up to the “Yellow Zone” (around -18dB to -12dB minimum), so your voice sits far above the noise floor.
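The underlying point is that the signal-to-noise ratio is locked in at capture: boosting in post raises the hiss by exactly as much as the voice. A tiny numeric sketch (the level values are illustrative) makes this clear.

```python
# Minimal sketch: boosting a quiet recording in post raises the noise floor
# by the same amount, so the signal-to-noise ratio never improves.
voice_db = -30.0        # dialogue recorded too low
noise_floor_db = -60.0  # preamp hiss / room tone
boost_db = 18.0         # gain added in the edit to make dialogue audible

print("SNR before boost:", voice_db - noise_floor_db, "dB")                              # 30.0 dB
print("SNR after boost: ", (voice_db + boost_db) - (noise_floor_db + boost_db), "dB")    # still 30.0 dB
```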

3. Ignoring Dynamic Range

Sound checks can be misleading. You ask someone to say a few words, they speak calmly, and the levels look fine. But here’s the plot twist! When the recording starts, speakers get excited, laugh, or talk louder, and the audio clips right away. That happens because the levels were set for average volume, not the dynamic peaks.

  • The Fix: When sound checking, ask the talent to perform the loudest part of their script or to laugh loudly. Set your gain so that the moment hits -6dB. With the Hollyland LARK MAX 2, you can use 32-bit Float recording to capture audio internally. What’s great about this is that it keeps the sound free of distortion, even when there’s loud noise like shouting or screaming.

Frequently Asked Questions (FAQ)

What is the ideal audio level for YouTube videos?

For YouTube, you need to balance two different measurements: Peak (the loudest instant) and Loudness (the average volume over time).

  • Dialogue Peaks: Your dialogue should generally peak between -12dB and -6dB. This ensures it is loud enough to be clear on mobile devices but leaves enough “headroom” so you don’t clip if the speaker laughs or shouts.
  • Overall Loudness (LUFS): YouTube normalizes audio to -14 LUFS. If your video is quieter than this, YouTube won’t boost it much, making your video sound weak compared to others. If it is louder, YouTube will turn it down. Aiming for -14 LUFS gives you the best consistency across the platform.

What happens if my audio meter hits red?

The “Red Zone” (specifically hitting 0dBFS) indicates clipping.

Analog tape can be pushed and still sound pleasant, but digital audio has a strict ceiling. When your signal hits 0dB, the top of the audio waveform is sliced off flat. This results in harsh, robotic distortion that is permanently baked into your footage. In 99% of cases, clipped audio cannot be fixed in post-production.

What is the difference between VU meters and Peak meters?

The difference lies in how they measure sound:

  • VU (Volume Unit) Meters: These measure the average loudness of the sound, similar to how human ears perceive volume. They are slower to react. You often see these as “needles” on older analog gear.
  • Peak Meters: These measure the instantaneous volume of the signal. They react instantly to sudden spikes (transients), which makes them the meter to watch when you are guarding against clipping.
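
Here is a minimal sketch of how the two readings can disagree on the very same audio (Python with NumPy assumed; RMS is used as a rough stand-in for an averaging, VU-style reading). A short transient barely moves the average but pins the peak meter, which is exactly why you watch peaks to avoid clipping.

    import numpy as np

    def peak_db(x):
        return 20 * np.log10(np.max(np.abs(x)))

    def rms_db(x):
        return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

    t = np.linspace(0, 1, 48000)
    speech = 0.1 * np.sin(2 * np.pi * 200 * t)      # steady voice peaking around -20 dBFS
    speech[24000:24010] = 0.9                        # a 10-sample click or plosive

    print(f"peak reading:    {peak_db(speech):6.1f} dBFS")   # about -0.9 dBFS, nearly clipping
    print(f"average reading: {rms_db(speech):6.1f} dBFS")    # still low; the spike barely registers
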
]]>
How to Calibrate Your Monitor for DaVinci Resolve: The Colorist’s Guide to Accuracy https://www.hollyland.com/blog/tips/calibrate-your-monitor-for-davinci-resolve Thu, 08 Jan 2026 08:01:04 +0000 https://www.hollyland.com/?p=496928 ]]> Learning how to calibrate your monitor for DaVinci Resolve is the single most important technical skill a colorist can master. If you have ever graded a project in DaVinci Resolve, exported it, and then panicked because the colors looked completely different in QuickTime or YouTube, you have experienced the “Confidence Crisis.”

But here’s the thing: the issue isn’t really your grading skills. It comes down to how DaVinci Resolve manages color differently from the other applications running on your system. Keep reading to learn the right way to match your monitor and software, so what you see is what you actually get.

The “Confidence Crisis”: Why DaVinci Resolve is Different

Most applications (like Photoshop, Chrome, or Premiere Pro) rely on your operating system (Windows or macOS) to manage color. When you calibrate your monitor and save an ICC Profile, the OS acts as a filter, adjusting the colors globally to ensure they look correct on your specific screen.

DaVinci Resolve does not work this way.

Instead of using your system’s color settings, DaVinci Resolve uses its own pipeline to keep things fast and precise: it sends raw pixel data directly to your GPU. This creates a dangerous disconnect:

  • Your Desktop: Is being corrected by your calibrated ICC profile.
  • Your Resolve Viewer: Is likely displaying a raw, uncorrected signal.

You might be adding contrast to your footage not because the video needs it, but because your monitor is washing it out, and Resolve isn’t using your calibration profile to fix it.

GUI Viewers vs. Reference Monitors

To make sure what you see is reliable, you will need to separate the two kinds of images in a color suite. Here they are:

  1. The GUI Viewer (Unreliable): This is the video window embedded inside the DaVinci Resolve software interface. Because it works through your operating system and GPU, you might see minor gamma shifts and color issues. It is fine for editing cuts, but it is not trustworthy for critical color grading.
  2. The Reference Monitor (Reliable): This sends a clean video signal directly from your PC to a separate display, usually through a dedicated I/O device like a DeckLink or UltraStudio. Because the signal bypasses the OS and GPU drivers entirely, the image stays pixel-perfect.

If you are grading strictly off the GUI viewer without specific configuration, you are essentially guessing. The goal of calibration in Resolve is to bridge this gap, ensuring that what you see is mathematically accurate to the industry standard (Rec.709).


Essential Gear: What You Need Before Calibrating

You cannot calibrate a monitor by eye, because your vision adapts to color quickly: a screen that looks white in the morning can look blue at night. To get an objective baseline for color grading, you need specific hardware to measure the output of your screen.

With DaVinci Resolve, a probe only gets you halfway there; the signal path your image takes from your computer to your display is just as important.

The Signal Path: Why Your GPU is the Enemy

In a standard computer setup, your Operating System manages the color sent from your Graphics Card (GPU) to your monitor. It applies ICC profiles, “Night Shift” modes, and gamma corrections that you often can’t turn off. For color grading, you want a clean, unadulterated video signal.

There are three main ways to set this up:

Setup Type | Hardware Needed | Cost | Accuracy Level | Who is this for?
The “GUI” Viewer | Standard monitor + GPU (HDMI/DisplayPort) | $0 (existing gear) | Low/Risky | Editors doing rough cuts; not recommended for final grading.
Video Clean Feed | Secondary monitor + GPU | $0 (requires Studio Version) | Medium | YouTubers and content creators. Bypasses GUI clutter but still relies on OS color management.
Dedicated I/O (Reference) | Blackmagic DeckLink or UltraStudio Monitor 3G | $120 – $1,000+ | High (bit-perfect) | Professional colorists. Bypasses the GPU/OS entirely for a 100% pure signal.

The Calibration Probe: The “Eye” of Your System

No matter which signal path you pick, you will still need a device that measures the actual light coming from your screen.

  1. Entry Level (Datacolor SpyderX Pro/Elite): Affordable and fast. Great for standard sRGB/Rec.709 monitors, though they can struggle with shadow detail on higher-end displays.
  2. Mid-Range Industry Standard (Calibrite Display Plus HL): Formerly X-Rite. These are the workhorses of the industry. They can handle higher brightness levels (up to 3000 nits), making them essential if you are grading on a newer HDR-capable display or an Apple XDR screen.

Pro Tip: If you are on a budget, buy a Calibrite Display SL or Plus HL and use the free, open-source software DisplayCAL. It is widely considered more accurate than the manufacturer’s bundled software, though the profiling process takes longer.


Step 1: Hardware Calibration (The Probe Workflow)

Once you have your probe, you must create a reliable base profile for your monitor. Follow this standardized workflow using software like i1Profiler, Calman, or DisplayCAL.

  1. Warm Up the Monitor (30+ Minutes): Turn your monitor on at least 30 minutes before calibrating. Panels—especially OLEDs and older LCDs—shift colors as they heat up.
  2. Reset to Factory Defaults: Open your monitor’s OSD (On-Screen Display) menu and perform a factory reset. Turn off features like Dynamic Contrast, Eco Mode, or Auto-Brightness. These features actively fight against calibration.
  3. Set Target White Point to D65 (6500K): This is the industry standard for video. D65 mimics average midday light. While some graphic designers prefer D50 (5000K) for print, video strictly requires 6500 Kelvin.
  4. Select Your Target Gamma:
    • Gamma 2.4 (BT.1886): Choose this if you are grading in a dimly lit, controlled environment. This is the standard for broadcast and cinema.
    • Gamma 2.2: Choose this if you are grading in a bright office or exclusively for the web. Since most computer monitors and phones default to 2.2, this provides a safer “what you see is what they get” environment for social content.
  5. Set Luminance (Brightness):
    • 100 nits (cd/m²): The standard for a dim grading suite.
    • 120 nits (cd/m²): A practical standard for a typical office environment with some ambient light.
  6. Run the Profiling Sequence: Place the probe flat against the center of the screen. The software will flash color patches, measure the output, and generate an ICC Profile. Save this profile and ensure it is set as the system default.

Important Information about ICC Profiles

Once you complete this step, your Operating System will use this ICC profile to correct colors on your desktop and in web browsers. However, remember that DaVinci Resolve interacts with these profiles differently. Simply creating this profile is not the end of the road; you must now configure Resolve to recognize and respect this calibration.


Step 2: Configuring DaVinci Resolve Project Settings

If your physical monitor is calibrated to Rec.709 Gamma 2.4, but DaVinci Resolve is sending out a signal designed for a different standard, your grading decisions will be flawed.

To access these controls, open your project, click the Gear Icon in the bottom right corner (or press Shift + 9), and navigate to the Color Management tab.

The Configuration Checklist

  • Color Science: DaVinci YRGB. For most users moving from Premiere or Final Cut, DaVinci YRGB is the standard choice. It is a “display-referred” workflow, meaning Resolve does not automatically transform your footage; you manually normalize your log footage to Rec.709 using nodes or LUTs.
  • Timeline Color Space: Rec.709 Gamma 2.4. This setting dictates the color space of your working timeline.
    • Rec.709: The industry standard for HD television and most web content.
    • Gamma 2.4: The broadcast standard for grading in a dim environment. Even if you are delivering for the web (Gamma 2.2), grading at 2.4 is industry best practice because it ensures your shadows have adequate contrast.
  • Data Levels: Video vs. Full (the “Auto” trap). This is the most common cause of “milky” blacks or “crushed” shadows. The setting controls how black and white points are mapped (e.g., is “absolute black” code 0 or code 16?).
    • Auto: In theory, Resolve selects the right mode; in practice, it often guesses wrong.
    • The Fix: Do not rely on “Auto.” Check your monitor’s internal menu. If your monitor is set to receive a “PC/Data” signal, force Resolve to Full. If your monitor is a TV or calibrated reference display set to “Video/Legal,” force Resolve to Video. A mismatch here will immediately invalidate your calibration.
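
Here is a minimal sketch of the mapping itself (Python assumed, using 8-bit codes for simplicity), which shows why a data-levels mismatch looks the way it does: video/legal range parks black at code 16 and white at 235, while full/data range uses 0 to 255.

    def full_to_video(code):
        """Compress an 8-bit full-range code (0-255) into video/legal range (16-235)."""
        return round(16 + code * (235 - 16) / 255)

    def video_to_full(code):
        """Expand an 8-bit video/legal code (16-235) back to full range (0-255)."""
        return round((code - 16) * 255 / (235 - 16))

    print(full_to_video(0), full_to_video(255))   # 16 235: where legal black and white live

    # Mismatch A: a legal-range signal read as full range. The display treats code 16 as
    # dark gray instead of black, so blacks look "milky" and whites look dull.

    # Mismatch B: a full-range signal read as legal range. Codes below 16 are pushed past
    # black and clamped, so shadows are "crushed"; codes above 235 clip to white.
    print(video_to_full(10))                      # -7: a dark shadow value that would be lost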

Step 3: The “Video Clean Feed” Solution (Bypassing the OS)

If your setup lacks a dedicated I/O device, such as a Blackmagic UltraStudio, do not expect Resolve to show accurate colors on a second monitor: operating system overlays, window borders, and UI elements all interfere with how your graphics card renders the image.

The built-in solution for this is Video Clean Feed.

This feature forces DaVinci Resolve to send a clean, full-screen video signal to a secondary display, stripping away GUI clutter and sidestepping some of the OS color interference.

How to Enable Video Clean Feed

  1. Ensure your secondary monitor is connected and detected by your OS.
  2. Open DaVinci Resolve.
  3. Navigate to the top menu bar and select Workspace > Video Clean Feed.
  4. Select the monitor you wish to use as your reference display.

The Limitation: GPU vs. Dedicated I/O

While Video Clean Feed is a massive improvement over the standard GUI viewer, it is essentially a “software hack.” Since the signal still goes out through your Graphics Card (GPU), it is still affected by your operating system. 

Pro Tip: When “Good Enough” Isn’t Enough. If you are grading for broadcast television, Netflix, or theatrical release, you cannot count on Video Clean Feed. To achieve 100% signal accuracy, you must bypass the Operating System entirely using a Blackmagic DeckLink or UltraStudio, which takes the video data straight from Resolve and sends a pure SDI or HDMI signal to your screen.


Advanced Value-Add: Solving the “Mac Gamma Shift” (1.96 vs 2.4)

Editing on a Mac introduces another horror, often called the “QuickTime Nightmare.” You spend hours fine-tuning contrast, then export the file, open it in QuickTime or Safari, and the image looks washed out, flat, and desaturated.

So, is there something wrong with your grading expertise? Not at all. This happens because of a conflict between industry standards and Apple’s ecosystem: DaVinci Resolve operates at Gamma 2.4, while macOS ColorSync manages video playback using a gamma closer to 1.96.
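
The size of that shift is easy to quantify. Treating display gamma as a simple power law (a simplification; real transfer functions such as BT.1886 add refinements), a normalized code value V is displayed at brightness V raised to the gamma. A minimal sketch of the arithmetic (Python assumed):

    def luminance(code_value, gamma):
        """Relative brightness of a normalized code value under a pure power-law gamma."""
        return code_value ** gamma

    for v in (0.2, 0.5, 0.8):
        graded = luminance(v, 2.4)    # what you saw while grading (Gamma 2.4)
        played = luminance(v, 1.96)   # what QuickTime shows via ColorSync (~1.96)
        lift = (played - graded) / graded * 100
        print(f"code {v:.1f}: graded {graded:.3f} vs played {played:.3f} ({lift:+.0f}% brighter)")

The lift is largest in the shadows and midtones, which is exactly the flat, washed-out look described above.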

Solution 1: The “Rec.709-A” Tag Trick

The simplest way to make QuickTime match what you saw in DaVinci Resolve is to adjust the metadata tags when exporting the file.

  1. Go to the Deliver Page in Resolve.
  2. Scroll down to the Advanced Settings section.
  3. Locate Gamma Tag.
  4. Change it from “Same as Project” to Rec.709-A.

The “A” stands for Apple. This tag essentially tricks macOS into applying the correct display transform, ensuring your contrast looks correct on Mac screens, iPhones, and iPads.

Solution 2: “Use Mac Display Color Profiles for Viewers”

If you are grading directly on a Mac display (like an iMac or MacBook Pro) without an external I/O device, you can force the Resolve Viewer to match the OS.

  1. Navigate to DaVinci Resolve > Preferences > System > General.
  2. Check the box for “Use Mac Display Color Profiles for Viewers.”
  3. Restart DaVinci Resolve.

Warning: Do not use this setting if you are using a dedicated external reference monitor via a DeckLink. It will cause a mismatch between your clean feed and your GUI.

Myth vs. Fact: The Export Crisis

The Myth | The Fact
“My export is broken.” | “Your export is fine; the player is wrong.”
Users often panic, thinking they rendered the file incorrectly because the blacks look gray in QuickTime. | If you re-import that “washed out” file back into DaVinci Resolve, you will see the contrast return perfectly. The data is safe; the video player is simply interpreting the gamma curve incorrectly.

Loading a 3D LUT into Your Monitor (For High-End Displays)

If you’re chasing maximum accuracy, a hardware 3D LUT can go further than any software-only approach. However, this option requires a professional display, such as an EIZO CG, ASUS ProArt, or LG OLED.

A Hardware LUT workflow bypasses your computer’s graphics card entirely. The color correction instructions are uploaded directly into the monitor’s internal memory. This means the monitor corrects the image after it receives the signal but before it displays the pixels.
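
Conceptually, a 3D LUT is just a lookup cube: every input RGB triplet indexes into a lattice of corrected RGB values. Here is a minimal sketch of the idea (Python with NumPy assumed; it uses nearest-neighbor lookup for brevity, whereas real monitor firmware interpolates between lattice points, and the red-channel tweak is an invented example, not a real calibration).

    import numpy as np

    N = 17                                            # a 17x17x17 lattice, a common LUT size
    grid = np.linspace(0.0, 1.0, N)
    lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)   # identity LUT, shape (N, N, N, 3)

    lut[..., 0] *= 0.95                               # pretend calibration: tame a slightly warm panel

    def apply_lut(rgb, lut):
        """Correct one pixel by looking up the nearest lattice point."""
        idx = np.clip(np.round(np.asarray(rgb) * (N - 1)).astype(int), 0, N - 1)
        return lut[idx[0], idx[1], idx[2]]

    print(apply_lut([0.5, 0.5, 0.5], lut))            # mid-gray comes back with red pulled down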

The Workflow: Using DaVinci as a Pattern Generator

To generate a highly accurate 3D LUT, you shouldn’t rely on static test images. You should use DaVinci Resolve to generate the color patches.

  1. Connect Your Probe: Plug your colorimeter into your computer.
  2. Connect the Data Cable: Ensure the USB upstream cable is connected from the monitor to the PC (allowing the software to “talk” to the monitor).
  3. Enable Remote Grading: In DaVinci Resolve, go to Workspace > Monitor Calibration and select your software (e.g., Calman, ColourSpace).
  4. Run the Calibration: In your calibration software, connect to DaVinci Resolve as the “Pattern Source.” DaVinci will automatically flash the necessary color patches on your screen.

Once calculated, upload the resulting 3D LUT directly to the monitor’s firmware slot (e.g., “User Mode 1”).


Frequently Asked Questions (FAQ)

What is the best Gamma for DaVinci Resolve: 2.2 or 2.4?

The correct choice depends on your viewing environment.

  • Gamma 2.4 (The Standard): Use this for Rec.709 broadcast work in a dimly lit, controlled environment. It provides the most accurate contrast and shadow depth.
  • Gamma 2.2 (The Web/Office): Use this if you are grading in a bright office or exclusively for YouTube/Social Media, where users watch on bright screens.

Can I calibrate my monitor without a probe?

No. You cannot accurately calibrate color with the naked eye. Human vision is prone to chromatic adaptation. If you stare at a screen with a slight blue tint for 10 minutes, your brain will “auto-white balance” and tell you that the screen is pure white. You can adjust Brightness and Contrast using PLUGE patterns (SMPTE bars), but not color temperature.

Why does my video look different on my phone?

This happens for three reasons:

  1. No Calibration: Phones are factory calibrated to look “punchy,” not accurate.
  2. Different Color Spaces: High-end phones often use the P3-D65 color space, which is wider than Rec.709.
  3. User Settings: “True Tone” and “Night Shift” drastically alter the image.

The Solution: Trust your calibrated monitor and your scopes. If your scopes say the image is balanced, it is balanced. You cannot control how every uncalibrated device in the world displays your work.

Conclusion & Final Checklist

Color accuracy is a journey, not a destination. Even with the most expensive probe and a high-end reference monitor, displays drift over time. By following the workflows in this guide, you turn guessing into knowing.

If you take away one thing from this guide, let it be this: If you cannot trust your monitor, trust your scopes.

Keep in mind that human eyes adapt to warm light and get tired after hours of grading. Scopes like the Waveform, Vectorscope, and Parade don’t; they show you the image signal as pure math. They will tell you if your blacks are crushed or your skin tones are too green, regardless of what an uncalibrated screen displays.

Your Pre-Grade Calibration Checklist

Before you start your next color session, run through this quick audit:

  • Warm-Up: Has the monitor been on for at least 30 minutes?
  • Environment: Is the room lighting controlled (dimmed) and neutral?
  • OS Settings: Are “Night Shift,” “True Tone,” or “f.lux” disabled?
  • Resolve Settings: Is your Timeline Color Space set to Rec.709 Gamma 2.4?
  • The “Eye” Test: Do your skin tones look natural on your reference screen compared to your scopes?

Don’t let a lack of expensive hardware stop you from creating. Learn to read the data, and your grades will look professional on every screen.

]]>
How to Add Music to Your Twitch Stream (Step-by-Step) https://www.hollyland.com/blog/add-music-to-your-twitch-stream Tue, 06 Jan 2026 10:51:38 +0000 https://www.hollyland.com/?p=493033 ]]> Suppose you’ve ever gone live on Twitch and felt like something was missing. Your gameplay is solid, your mic sounds good, but the vibe is still a little empty. Music is usually the missing piece. But for many streamers, adding music becomes frustrating fast. Copyright warnings, muted VODs, messy audio settings, Spotify not being picked up, or the fear of “doing it wrong” stop creators from using music altogether.

This guide breaks everything down into clear, beginner-friendly steps so you can safely add music from Spotify, YouTube, local files, and other sources. 

What You Need Before Adding Music to Your Twitch Stream

Before we jump into the methods, make sure you understand how Twitch handles music and how your streaming software processes audio.

Check Music Permissions

Twitch does not allow streaming copyrighted songs that you don’t own the rights to. Even if the music plays fine during the live broadcast, your VOD may get muted afterwards. Before choosing any music source, ensure you’re using:

  • Royalty-free music
  • Music you own or licensed
  • Soundtrack by Twitch (now discontinued; see the FAQ below)
  • Independent artist-licensed tracks
  • Streamer-friendly playlist platforms

How to Add Music to a Twitch Stream Using Local Audio Files

Local audio files (MP3, WAV, OGG) are ideal because they give you full control and no copyright problems as long as you legally own or have licensed them. Note that Twitch Studio has no dedicated media source for local files, so you’ll need a different application such as OBS Studio or Streamlabs Desktop.

Option 1 – Adding Local Music Files in OBS Studio

  • In the Sources panel, click the + icon and choose VLC Video Source (this source appears when VLC media player is installed).
  • Name the source (e.g., Music), and click OK.
  • In the source properties, click the + icon under the playlist.
  • Select the Add Files option.
  • Choose your MP3/WAV files and click Open.
  • Set any remaining playlist options (such as loop or shuffle) and click OK.
  • The music will be added to the Audio Mixer, where you can lower its volume so it doesn’t overpower your microphone.

Option 2 – Adding Local Music Files in Streamlabs Desktop

If you are streaming with Streamlabs, here’s the quick method to add local music to your Twitch streams.

  • Open Streamlabs Desktop and go to Sources.
  • Click on Media file → Add Source.
  • Scroll and select your music.  If you want to add a new source instead, toggle that and click Add source.
  • You can see the Music added in the Mixer bar. Adjust the volume levels and hit the Go Live button.

How to Add Spotify or YouTube Music to Your Twitch Stream

Spotify and YouTube are common music sources, but remember that their tracks are not licensed for Twitch VODs, so they must be added carefully due to copyright rules. Below, we walk through the steps to add Spotify music to a Twitch stream; similar methods work for YouTube music too.

I. Using Spotify to add Music to Twitch through OBS Studio

You can use Spotify in two ways:

  • Through a web browser (Chrome, Firefox, Edge, etc.)
  • Through the Spotify desktop application

Either option works for capturing audio inside OBS.

Option A – Spotify through a Web Browser

  • Open Spotify in your browser (say, Chrome). Once your music source is ready, move back to your OBS window.

Method #1 – Capture Spotify Using Desktop Audio

  • Check whether the Desktop Audio meter moves in the OBS Audio Mixer.
  • If you do not see Desktop Audio, click Settings in the bottom-right corner.
  • Next, click Audio → choose the device that plays your system audio (e.g., Speakers, Headphones, Realtek) → click OK.
  • When you play a song, OBS will detect it through Desktop Audio, and you’ll see the audio levels move inside the Audio Mixer.

This confirms the sound is being captured for your stream. This is the simplest method because OBS automatically captures whatever plays on your computer.

Method #2 – Isolate Spotify Using Application Audio Capture

If you want a more direct and controlled way to add music to your Twitch stream, this method skips Desktop Audio entirely.

This lets you isolate only Spotify audio and prevent other desktop sounds from being captured.

  • At the bottom of OBS, click the + button in the Sources panel.
  • Choose the Application Audio Capture (BETA) option.
  • Rename the source (example: Spotify Google Chrome, or any browser you are using) and click OK.
  • In the configuration box, go to the Window dropdown, choose the browser window where Spotify is open (for example, [Chrome.exe]), and click OK.
  • A new audio bar labelled Spotify Google Chrome appears.

This means only Spotify is being captured, and no other browser sound or desktop notifications. This completes the Browser Method using Audio Capture.

Option B – Spotify Using the Desktop App

This method lets you capture only the Spotify app’s audio, without capturing your entire system sound. It’s perfect if you want clean, isolated music control.

  • Install the Spotify Desktop app.
  • Go to the Sources panel at the bottom of OBS. Click the + (Add) button.
  • Choose Application Audio Capture (BETA) from the list.
  • A naming window will appear. Name it something like “Spotify App” and click OK. This helps differentiate between Spotify via browser and Spotify via app.
  • In the Window dropdown, look for the Spotify application.
  • It may appear with different system-level identifiers, such as spotify.exe. Next, click OK.
  • OBS will now isolate and listen only to the Spotify app’s audio. A new audio bar labelled Spotify App will appear in the Audio Mixer.

II. Using Spotify to add Music to Twitch through Streamlabs

If you stream through Streamlabs, follow these steps:

  • Click on the + icon in the Sources menu.
  • Choose Application Audio Capture (BETA) → Add Source.
  • Name the source, say “Music 2”, and click Add Source.
  • If you’re adding multiple audio or video sources or managing different music tracks across scenes, a stable, high-quality backup workflow becomes essential.
  • Devices like the Hollyland VenusLiv Air help with this by supporting 24/7 streaming, high-resolution capture, and clean audio input, and they’re ideal for keeping a master recording even when your Twitch VOD excludes certain music tracks.
  • Now choose the music source you want to add. 
  • For Spotify, ensure you have Spotify music running in the background through the browser or app.
  • Choose [Spotify.exe] in the Window dropdown.
  • “Music 2” now appears in the Mixer. This ensures Spotify music is successfully added to be streamed on Twitch through Streamlabs.

Conclusion

Add music to your Twitch stream confidently by choosing rights-cleared sources, isolating music with application capture, and using multi-track routing to keep your VODs clean. Licensed, streamer-friendly music libraries are the safest choice. For pro workflows and secure master copies, pair software routing with hardware like the Hollyland VenusLiv Air 2 to archive edit-ready recordings.

FAQs

Q1.  Can I use Soundtrack by Twitch to avoid VOD muting?

Not any longer. Twitch officially shut down the Soundtrack by Twitch service on July 17, 2023. This means you can no longer add new tracks to it. Instead, you’ll need to use other rights-cleared music libraries or ensure routing and licensing compliance manually.

Q2. How do I keep music out of my Twitch VODs while still letting it play live?

Use multi-track routing in your encoder. In OBS (Advanced Output), enable multiple audio tracks, assign mic/game to the stream/VOD track, and assign music to a stream-only track. For Streamlabs, do partial routing. Twitch Studio lacks VOD routing. Test local recordings to confirm the final VOD file has no music.
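
One quick way to run that test is to inspect a local multi-track recording and confirm the track layout before relying on it for a real broadcast. Here is a minimal sketch (Python assumed; it shells out to ffprobe, which must be installed separately, and the filename is a placeholder).

    import json
    import subprocess

    RECORDING = "obs_test_recording.mkv"   # placeholder: your local multi-track test recording

    # Ask ffprobe to describe only the audio streams as JSON.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_streams", "-select_streams", "a", RECORDING],
        capture_output=True, text=True, check=True,
    )

    streams = json.loads(result.stdout)["streams"]
    print(f"{len(streams)} audio track(s) found")
    for s in streams:
        print(f"  track {s['index']}: codec={s['codec_name']}, channels={s.get('channels')}")
    # Then play each track in your editor to confirm the VOD track contains no music.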

Q3. How do I stop my VOD from getting muted if I use Spotify for live music?

Mute Spotify from the VOD track by assigning it only to the live audio track using OBS’s Advanced Audio Properties. This allows followers to hear music during the live stream but keeps it out of recordings and VODs. Always test your audio tracks before starting a session.

Q4. How can I allow song requests safely without risking copyright strikes?

Enable requests only from vetted libraries (Pretzel, Lickd, royalty-free lists). Configure your song-request bot to accept approved sources only, block open Spotify/YouTube requests, and log every played track. Review user submissions before adding them live. This reduces unexpected copyrighted songs being played on stream or saved to VODs.

Q5. Do I need extra hardware for backups and better audio control?

Not necessarily. Most streamers manage music and routing using software tools like OBS or Streamlabs. However, audio interfaces and mixers such as GoXLR, Elgato Wave Link, or Beacn Mix can make balancing channels easier. Hardware isn’t mandatory, but it can improve workflow, consistency, and overall sound quality.

]]>
How to Stream The Voice Show (Free & Paid) https://www.hollyland.com/blog/tips/stream-the-voice-show Tue, 06 Jan 2026 10:44:03 +0000 https://www.hollyland.com/?p=492982 ]]> The Voice is one of TV’s most-watched music competitions. Can’t wait to see which coach turns their chair this season? Now, you can stream it live without cable subscriptions.

And if you regularly create or stream your own reaction videos or commentary, a tool like Hollyland VenusLiv Air can help you broadcast in 4K 30 fps with 24/7 stable streaming. It’s perfect for creators wanting quality streams without complex camera setups.

NBC, which airs The Voice, makes the show available through different platforms. So no matter what device you use, you can enjoy every performance in just a few clicks.

How to Watch The Voice for Free

Before signing up for a subscription, try these official, cost-free ways to watch or catch episodes online. You will need either a stable internet connection or a simple over-the-air setup.

Method #1 – Over-the-Air (OTA) via NBC

If you prefer true live viewing without the internet, an over-the-air antenna can be your best bet.

  • Check whether your local NBC affiliate broadcasts over the air at your address.
  • Connect a digital antenna to your TV.
  • On your remote, go to Settings → Channel Scan.
  • Once NBC appears in your channel list, you’re ready to tune in live on Mondays and Tuesdays at 8 pm ET.

Why it works: NBC is a free broadcast network. Once your antenna is set, you can watch The Voice at no ongoing cost.

Method #2 – Watch Highlights & Performances on The Voice’s Official YouTube Channel 

If you don’t need full episodes, The Voice uploads free official clips, highlights, auditions, battles, live performances, and behind-the-scenes moments to its verified YouTube channel. It’s a great way to stay updated without any subscription.

Official Channel: www.youtube.com/@nbcthevoice

For Desktop 

  • Open your browser and go to the official YouTube channel
  • Sign in with your Google account (optional, but needed for subscribing).
  • Scroll to playlists like Auditions, Battles, Knockouts, Live Shows, or Best Performances.
  • Click any video → watch free clips in HD.
  • You can subscribe to get notified whenever new performances are uploaded.

For Mobile (YouTube App)

  • Open the YouTube app on your phone.
  • Tap the search bar → type “The Voice NBC” → choose the verified channel with the blue check.
  • Explore the Uploads and Playlists tabs for the latest performances.
  • Tap any video → rotate your device horizontally for full-screen HD.
  • Tap Subscribe and Bell Icon to get instant alerts for new clips.

Method #3 – Use Free Trials on TV Platforms

Some of the live-TV streaming services offer a free trial or limited free access during major events. You can try these for a short viewing experience:

  • fuboTV: It has the NBC channel and offers a 5-day free trial.
  • Hulu + Live TV: Includes 95+ channels with a 3-day free trial.
  • DirecTV Stream: Offers NBC with a 5-day trial.
  • Peacock TV: Gives a 7-day free trial.
  • YouTube TV: It provides a 10-day free trial for new users.

How to Stream The Voice via NBC Platforms

NBC provides both live and on-demand access to The Voice through its official platforms. Whether you’re on a computer or a smartphone, setup takes only minutes.

Option A – NBC.com Desktop

  • Go to nbc.com and sign up for your account.
  • Scroll or search for The Voice.
  • Select Episodes or Live TV (available during broadcast).
  • Sign in with your TV provider if prompted.
  • Now you can begin watching directly in your browser.

Option B – NBC App (Mobile)

  • Download the NBC app from the App Store or Google Play.
  • Log in or link your participating TV provider.
  • Tap Watch Live and select NBC or search for The Voice from the Shows tab.
  • Start streaming on your mobile device.

How to Stream The Voice via Live-TV Apps (Paid Options)

If you would rather watch live or record episodes for later, choose one of these live-TV streaming services. Each includes NBC in most markets and works across desktop, mobile, and smart-TV devices.

1. Peacock Premium

Before you look elsewhere, NBC’s own streaming service, Peacock, is the most direct way to watch The Voice. It offers both live and on-demand episodes, along with exclusive NBC content.

  • $11/month or $110 annually (Premium) | To watch without ads, $17/month or $170 annually (Premium Plus).
  • Log in or create an account on Peacock.
  • Search for The Voice in the search icon or find it in TV shows.
  • Select to stream The Voice on any device.

2. Hulu + Live TV

If you want an all-in-one package with live channels, sports, and on-demand entertainment, Hulu + Live TV is a great choice. It combines NBC access with Disney+ and ESPN+ under one plan.

It offers different base plans, with the unlimited-channel plan costing $44.99/month.

  • Sign up at hulu.com → confirm NBC is available in your ZIP code.
  • Open Live TV tab → choose NBC → watch The Voice.
  • You can also directly search for it by clicking the icon in the top-right corner.

3. fuboTV

Looking for a sports-heavy live-TV service that also includes NBC nationwide? fuboTV gives you local channels, cloud DVR, and multi-device access.

  • Plans range from $14.99 to $94.99/month (Pro plan); a free trial is available.
  • Sign up at fubo.tv and create your account → choose a plan with local NBC.
  • Now open app/web → Go to Live → select NBC.
  • Click on Play and stream the show.

4. DirecTV Stream

If you prefer a traditional cable-style layout but want to watch online, DirecTV Stream offers the familiar experience, complete with local NBC access.

  • Starting at $79.99/month (Entertainment plan; 5-day trial).
  • Browse directv.com and sign in. 
  • Select a package including NBC in your ZIP area. 
  • Now select NBC and watch The Voice via app or browser.

5. YouTube TV

Prefer a sleek, feature-packed interface? YouTube TV includes NBC, unlimited DVR storage, and simultaneous streaming across multiple devices. It is perfect for fans who never want to miss an episode.

YouTube TV Base Plan: $72.99/month for your first 3 months, $82.99/month after that.

  • Sign in with a Google account.
  • Tap Live → select NBC from the list of channels.
  • Now search for The Voice and stream.

Conclusion

Streaming The Voice is simple: watch next-day episodes free on Peacock, tune in live on NBC via antenna, or subscribe to a live-TV service for full access. Choose your platform, sign in, and enjoy the music, drama, and the thrill of the chair turns, all without cable.

FAQ

Q1. Can I watch The Voice on Roku or Fire TV?

Yes. You can download the NBC or Peacock app on your Roku or Fire TV device. After signing in, search for The Voice and start streaming instantly. You can also access the show via the Hulu + Live TV or YouTube TV app if those are installed on your device.

Q2. Can I watch The Voice outside the U.S.?

NBC and Peacock are currently geo-restricted to the U.S. due to licensing agreements. If you’re traveling abroad, you’ll still find official clips and highlight videos on The Voice’s YouTube channel, but full episodes are unavailable internationally. Always check local platform rights before using any streaming workaround.

Q3. Do I need cable to watch The Voice live?

Yes, for live TV streams. NBC.com often prompts users to sign in with a participating TV provider for access. If you don’t have one, you can instead stream episodes the next day on Peacock or subscribe to a live-TV streaming service like Hulu + Live TV, YouTube TV, or fuboTV for full access.

Q4. Where can I watch older seasons?

Full previous seasons and standout performances are available to stream on Peacock, which houses NBC’s on-demand library. You will also find select clips, finals, and best-of compilations on The Voice’s official YouTube channel. For fans who love binge-watching, Peacock’s Premium plan is the best option.

Q5. What if NBC isn’t available in my area?

If NBC doesn’t appear on your streaming service due to regional restrictions, try using an over-the-air (OTA) antenna to access your local NBC affiliate for free. Alternatively, switch to a service like fuboTV or DirecTV Stream, which covers NBC in most ZIP codes across the U.S.

]]>
How to Stream Hockey Games Online: The Ultimate Viewer’s Guide https://www.hollyland.com/blog/tips/stream-hockey-games Tue, 06 Jan 2026 10:24:23 +0000 https://www.hollyland.com/?p=492951 ]]> With the NHL season in full swing, the days of worrying about missing a crucial face-off when away from home are over. Streaming has made live games, replays, condensed matches, and quick highlights effortlessly accessible. 

Technological advancements have also enhanced the viewing experience. When content is produced using professional, robust tools like the Hollyland VenusLiv Air, which supports 24/7 streaming and features AI-powered tuning, online viewers can feel as immersed in the game as if they were attending in person.

This guide breaks down how to stream hockey games online, choose the best platforms, and ensure a lag-free experience.

Top 5 Platforms for Streaming Hockey Games

Here are the top five platforms for streaming the NHL, complete with pros, cons, and pricing to help you find your perfect fit.

Hulu + Live TV (US)

Hulu + Live TV is a comprehensive, all-in-one legal solution for US Hockey viewers and offers the best value for money. It bundles access to all required national channels (ESPN, TNT, ABC), local Regional Sports Networks, and the entire ESPN Select library within one subscription.

  • Disadvantage: It does not include the dedicated NHL Network channel, which airs some exclusive live regular-season games. 
  • Pricing: Hulu + Live TV monthly subscription costs $89.99.

ESPN Select (US)

ESPN Select (formerly ESPN+) is the most affordable way to watch most hockey games. This platform offers 1000+ out-of-market games, plus 75 exclusive national games. It is ideal for fans of non-local teams who want to follow their games affordably. 

  • Disadvantage: It is not a comprehensive solution as you’re likely to miss many exclusive games, including the Stanley Cup Playoffs, especially if you follow a top team. 
  • Pricing: ESPN monthly subscription costs $12.99.

Sportsnet+ Premium (Canada)

Sportsnet+ Premium offers the most extensive coverage of Canadian teams, the Stanley Cup Playoffs, and over 1000 out-of-market games, including all U.S. vs U.S. matchups. 

  • Disadvantage: You will still miss regional games exclusive to the TSN network and the Prime Monday Night Hockey games on Amazon Prime Video. 
  • Pricing: Sportsnet+ monthly subscription costs $42.99 CAD.  

DirecTV Stream (US)

DirecTV Stream offers a comprehensive channel lineup, but its main appeal for sports fans is the new, relatively affordable MySports package, which focuses on sports. The base package includes 25+ national sports channels, the NHL Network, and the full ESPN Select library.

  • Disadvantage: Access to local Regional Sports Networks requires an extra add-on. 
  • Pricing: The MySports package is $69.99 per month. The MyHome Team add-on is $19.99 per month, bringing the total to $89.98 per month. 

YouTube TV (US)

YouTube TV provides access to all national channels and most Regional Sports Networks. You’re guaranteed an excellent user interface, reliable stream quality, and robust Cloud DVR storage that retains recordings for up to nine months.  

  • Disadvantage: It does not include ESPN Select for out-of-market games and the dedicated NHL network channel.
  • Pricing: YouTube TV base plan is $82.99 per month.

Conclusion

There’s a lot of confusion surrounding online hockey streaming, primarily due to complex broadcast rights and mandatory blackout rules that can cause fans to miss crucial games. This guide helps you cut through the confusion and select the best streaming platform for your needs.

FAQs

How do NHL blackout restrictions work?

NHL blackouts are geographic restrictions that require you to use a specific channel to watch a game in your area. If a regional sports network is airing your local team’s game, it is blacked out on out-of-market streaming packages, forcing you to watch it on the channel with exclusive local broadcast rights. 

Can I use a VPN to bypass blackouts?

Yes, a VPN can bypass blackouts by changing your virtual location. However, this violates the streaming service’s terms of service and can lead to app functionality issues or account suspension. 

Which streaming service should I choose to watch every NHL game?

In the US, Hulu + Live TV is widely regarded as the most comprehensive single subscription service, providing access to most local, national, and out-of-market NHL games. In Canada, you need to combine Sportsnet+ Premium, TSN+, and Amazon Prime Video to cover all games. 

Can I legally watch NHL games online for free?

Live NHL games usually require a paid subscription to a streaming service. However, you can use the free trial of many streaming services to watch NHL games for a limited time. Tubi also occasionally streams selected live NHL games, but it can’t replace a paid subscription.

How can I ensure a smooth NHL streaming experience?

A smooth NHL streaming experience requires adequate internet bandwidth, preferably at least 25Mbps for HD, and a stable connection. Use a wired Ethernet connection instead of Wi-Fi, and limit other network activity while streaming. 

]]>
Seamless Communication for High-End Event Filmmaking – Powered by Hollyland Solidcom SE https://www.hollyland.com/blog/seamless-communication-for-high-end-event-filmmaking-powered-by-hollyland-solidcom-se Tue, 30 Dec 2025 10:31:13 +0000 https://www.hollyland.com/?p=486271 ]]> About Well Naves and His Team

Well Naves is a Brazilian filmmaker and creative director focused on premium cinematic productions. He leads a small, highly specialized team dedicated to delivering high-end visual storytelling for weddings, commercial clients, and brand collaborations.

Their signature style is based on clean cinematic composition, static-shot aesthetics, intentional lighting and a refined luxury visual language. They have worked with brands such as Realme, Nanlite and Hollyland, always producing visually polished and emotionally engaging content with strong narrative direction.

Testing Background

Well Naves and his team executed two projects in São Paulo, Brazil, including a premium filmmaking production documenting a high-end event at Paróquia São José and the reception venue in Holambra.

The project focused on cinematic, storytelling-driven coverage, capturing behind-the-scenes moments, team coordination, and real-time communication powered by Hollyland equipment. The goal was to convey the event’s elegance, scale, and emotional depth while maintaining a refined, high-end visual aesthetic.

A total of 8 people used Hollyland products during the event, both at the church (Paróquia São José) and at the reception venue. The Solidcom SE headsets were distributed across the entire production team, including camera operators, assistants and coordination crew, ensuring seamless communication throughout the ceremony and the buffet.

The Project Scale and Environment

Both projects were medium-to-large scale productions, involving multiple camera operators, lighting setups and dynamic team coordination. The wedding had approximately 200 guests, while the Holambra production required extensive movement, wireless monitoring and precise team communication throughout several hours of filming.

The work environment included both indoor and outdoor conditions. At Paróquia São José, the project was filmed indoors in a large church setting with controlled lighting but constant movement of guests and staff. In Holambra, the production took place outdoors, in open natural environments with changing light conditions and frequent transitions between different locations. Both environments required reliable wireless communication and stable video transmission to maintain coordination across the team.

How Does Solidcom SE Enhance Communication?

Instant Communication, Seamless Coordination

“The communication was instant. It felt like we were working as a single unit the entire day.”

The system also features Auto Frequency Hopping technology, which enables automatic frequency hopping when encountering interference, making the communication system more stable and reliable.

Smoother Workflows

“The headsets made everything faster. No shouting, no running around, everything flowed smoothly.”

Unlock seamless team talk with a diverse range of options. Whether it’s for small events or medium-scale productions, we’ve got you covered. Choose the perfect fit to cater to your communication requirements.

Enhanced Team Collaboration

“Honestly, it was the first time we worked this organized during a big event. The system really made a difference.”

Solidcom SE Global Version supports connecting multiple remote headsets, catering to the communication needs of customers across different settings, from small- to medium-sized collaborative events.

Stay Connected

“Using the intercom saved us from missing key moments. It kept the whole team connected at all times.”

Operating over the 2.4 GHz frequency band, Solidcom SE Global Version guarantees a stable and steady connection over distances of up to 1,100ft (350m), making it ideal for both indoor and outdoor environments.

Wrapping Up

“During the ceremony at Paróquia São José and later at the reception venue, the headsets ensured continuous communication among the eight team members as we coordinated camera positions, lighting adjustments and guest flow. Throughout the entire day, the headsets contributed to smoother coordination and faster decision making.”

–Well Naves

To see these headsets in action, check out the behind-the-scenes footage below. It offers a closer look at how integrated communication seamlessly elevates the production workflow and empowers filmmaking teams to work with greater precision.

https://www.instagram.com/wellnaves/reel/DRUrdEokURi

]]>
Connected Without Compromise: How Hollyland Solidcom H1 Powered Communication for Foo Fighters Live in Jakarta https://www.hollyland.com/blog/application/connected-without-compromise-how-hollyland-solidcom-h1-powered-communication-for-foo-fighters-live-in-jakarta Fri, 19 Dec 2025 08:55:10 +0000 https://www.hollyland.com/?p=480652 ]]> For large-scale live events, seamless communication is the backbone of every successful show. During Foo Fighters Live in Jakarta, Hollyland’s Solidcom H1 intercom system delivered reliable, flexible, and resilient crew communication, ensuring every moment on stage and behind the scenes stayed perfectly in sync, even under challenging outdoor conditions.

“Moments in an event can’t be repeated, so communication between crew members is crucial.”

Meet Agung Wicaksono

Based in Indonesia, Agung Wicaksono is an experienced equipment director specializing in large-scale live events and professional communication systems. His work focuses on building reliable, scalable intercom solutions for concerts, festivals, and complex outdoor productions where timing, coordination, and redundancy are essential.

For Foo Fighters Live in Jakarta, Agung and his team were responsible for designing and operating the on-site communication system that connected multiple departments across a demanding live concert environment.

Challenge & Setup

The Foo Fighters concert took place on October 2, 2025, at Carnaval Ancol, Jakarta, an outdoor beachside venue. The environment presented immediate challenges. Heavy rain the day before the event left the site damp and unpredictable, increasing the risk of signal instability and equipment failure. With a large crew operating across stage management, audio, lighting, and technical coordination, uninterrupted communication was non-negotiable.

Traditional communication setups often struggle in environments like this, especially when channel distribution is limited or reconfiguration is slow. For this project, the team needed a system that could support multiple independent communication groups, maintain stable coverage across a wide outdoor area, and operate reliably throughout long event hours.

To meet these demands, Agung deployed the Hollyland Solidcom H1 intercom system, with the Geo Central Station serving as the core control unit. The system was configured for 10 users with two antennas and linked to an existing Clear-Com wired system, allowing seamless integration between wireless and cabled communication networks.

Solving Real On-Site Communication Pain Points

Once deployed, the Solidcom H1 system quickly became central to the event’s operational workflow. Each beltpack could be assigned to its own communication channel, giving the crew far greater flexibility compared to more rigid traditional systems. Departments could coordinate independently while still remaining connected through the central station.

Seamless roaming proved critical on the day of the show. As crew members moved between stage, FOH, and backstage areas, communication remained continuous without manual reconnection or signal drops. This allowed the team to respond instantly to changes during rehearsals and live performance, even as conditions shifted throughout the evening.

Battery reliability was another key factor. With long production hours and limited opportunities for downtime, the Solidcom H1’s extended battery life ensured uninterrupted use throughout the event. The system also allowed operators to monitor signal status and battery levels in real time, reducing uncertainty and helping the technical team stay ahead of potential issues.

“Many people underestimate the importance of communication tools during an event, even though they are crucial.”
— Agung, Equipment Director

While the overall system delivered strong performance, Agung noted areas for future improvement, such as a more intuitive matrix-style user interface and potential development of wired PoE beltpack options. Even so, the flexibility and channel capacity enabled by the Geo Central Station within the Solidcom H1 system marked a clear step forward compared to previous setups.

Takeaways

For Foo Fighters Live in Jakarta, the Hollyland Solidcom H1 provided a communication system built for scale, mobility, and reliability. With support for multiple independent groups, seamless roaming, and long battery life, the system enabled smooth coordination across a complex outdoor concert environment.

From large concerts to multi-department live productions, Hollyland’s professional communication solutions empower teams to stay connected when it matters most, ensuring that every cue, transition, and moment happens exactly on time.

“A great event begins with an excellent communication team.”

]]>
Directing Frame by Frame: Pyro 5 & Pyro 7 Power Julian Curi’s Paper-Cut Films https://www.hollyland.com/blog/application/directing-frame-by-frame-pyro-5-pyro-7-power-julian-curis-paper-cut-films Fri, 19 Dec 2025 08:45:48 +0000 https://www.hollyland.com/?p=480616 ]]> Hollyland delivers a cable-free monitoring workflow that keeps delicate, detail-driven filmmaking precise, efficient, and creatively focused.

Meet Julian Curi

Julian Curi is an independent animator and filmmaker based in the United States, specializing in paper-cut puppetry animation. As the sole writer, director, photographer, editor, and animator of his short films, Julian oversees every stage of production himself. He is best known for his short film Gruff, which has gained wide recognition on YouTube for its meticulous craftsmanship and distinctive visual style. Working at the intersection of animation and filmmaking, Julian’s process demands absolute control over framing, timing, and visual detail.

Small Scale, High Precision

Julian’s recent projects included both indie filmmaking and a small commercial shoot, all produced within the tight confines of a home studio. While the scale was modest, the demands were anything but. Paper-cut animation operates at a miniature level, where the smallest adjustment in framing, lighting, or puppet position can dramatically alter the final result. With two people working in close proximity and often performing actions simultaneously, the workflow needed to be efficient, flexible, and interruption-free.

In such a setting, traditional wired monitoring quickly becomes a bottleneck. Cables restrict movement, clutter the workspace, and force creators to repeatedly walk back and forth between the set and a fixed monitor. For a process built on patience and precision, these interruptions break focus and slow creative momentum.

Challenge & Solution

Reducing Physical Barriers on Set
Working in a very small home studio meant every inch of space mattered. Excess cables not only crowded the set but also made it difficult to reposition monitors or adjust framing quickly. By switching to Hollyland’s Pyro 5 and Pyro 7 wireless transmission system, Julian eliminated unnecessary cabling entirely. The monitor could move freely with the action, allowing framing and composition decisions to be made directly on set, in real time, without stopping the workflow.

Maintaining Accuracy During Delicate Performances
On the first day, Julian and his partner used the 4K transmitter with the Pyro 5 to review framing and composition for a self-made commercial advertisement. On the second day, both creators were actively performing actions on a small-scale puppet film while transmitting footage from the Pyro 5 to the larger Pyro 7. This setup allowed continuous visual oversight without either person stepping away from their task. In a medium where timing and hand placement are critical, the ability to monitor accurately while working proved invaluable.
“Precise, accurate, intuitive technology ready for filmmakers of all levels!” Julian noted.

A Wireless Workflow That Supports Creativity
Rather than treating monitoring as a separate technical step, the Pyro system became part of the creative flow. Adjustments could be made instantly, decisions confirmed visually, and progress evaluated without disrupting the rhythm of animation. “I didn’t know how badly I needed Hollyland products till I started using them. They’ve totally leveled up my game!” Julian added. Kristin Curi, who assisted on camera, echoed this experience, highlighting the visual clarity of the system: “Bright, sharp, clean colors that shine on monitors of all sizes.”

A Compact Ecosystem, Fully Connected

With just two people on set, the combination of Pyro 5 and Pyro 7 formed a compact but complete wireless ecosystem. Monitoring, framing, and collaboration stayed fluid, even in a tight indoor environment. Without cables dictating where people or equipment could stand, the team maintained focus on performance and storytelling rather than logistics.
“Excellent brand that’s sure to upgrade any kit it’s a part of,” Kristin summarized.

Takeaways

For Julian Curi’s paper-cut animation and commercial work, Hollyland’s Pyro 5 and Pyro 7 transformed a small home studio into a highly efficient, cable-free production environment. Wireless monitoring removed physical constraints, preserved creative momentum, and supported the extreme precision required by miniature filmmaking. By keeping technology intuitive and unobtrusive, Hollyland allowed the story to remain front and center, one carefully crafted frame at a time.

]]>
Freedom to Frame: How LARK MAX 2 Empowered Cinematic FPV Drone Flying for Tom Beeken https://www.hollyland.com/blog/application/freedom-to-frame-how-lark-max-2-empowered-cinematic-fpv-drone-flying-for-tom-beeken Fri, 19 Dec 2025 08:06:02 +0000 https://www.hollyland.com/?p=480564 ]]> Hollyland’s LARK MAX 2 wireless microphone system brings dynamic flexibility and precise control to the world of drone videography, allowing professionals to capture high-quality audio and video while pushing the limits of flight distance.

“The whole test was surprisingly simple. Attach the mic, fly the drone, and listen. The product basically proved itself,” says Tom Beeken, a professional videographer and drone pilot from the United Kingdom.

Meet Tom Beeken

Tom Beeken specializes in cinematic FPV (first-person view) flying, creating high-quality video and photography for brands. He handles everything from planning and filming to editing and delivery, with a focus on creative storytelling and dynamic visuals. Tom’s work pushes the boundaries of immersive video content and is built on a foundation of precision and creativity.

Challenge & Setup

Tom’s project involved testing the transmission range of the LARK MAX 2 wireless microphones. He filmed and flew a drone with the microphone attached, providing a live POV split-screen view while tracking the distance in real-time. The goal was to prove the LARK MAX 2’s long-range signal performance in a creative and engaging way.

The project took place outdoors at Hunt Cliff in Saltburn, United Kingdom, with cold temperatures and strong winds making for a challenging environment. Tom chose a larger 7-inch long-range drone to handle the weather while carrying the LARK MAX 2 mic. A small two-person crew, with Tom as the drone pilot and camera operator, and a spotter for safety and line-of-sight monitoring, managed the entire production.

Solving Real On-Set Pain Points

The LARK MAX 2 system was key to overcoming several technical hurdles during this outdoor shoot. Tom attached one microphone to the drone and clipped the second one to his jacket. The system was set to split stereo mode, allowing the crew to record commentary separately from the drone’s motor noise. This setup provided clear audio for post-production, making it easy to analyze audio dropouts at long distances.

Real-time audio monitoring through the earphones gave Tom and the crew immediate feedback on the quality of the sound. “Being able to monitor the audio in real time with the earphones meant we didn’t waste any battery or flight time. We knew instantly that everything was working,” Tom added.

Despite the challenging weather conditions, the LARK MAX 2’s signal remained strong and stable, and the drone was able to fly at distances approaching 300 meters while still transmitting clean audio. The crew completed the test without any issues, knowing exactly when the signal changed. “Seeing the drone reach close to 300 meters and still transmit clean audio was seriously impressive. We expected dropouts long before that point,” said the spotter.

Takeaways

For Tom Beeken, the LARK MAX 2 system transformed his approach to long-range drone videography. It allowed him to test the microphone’s range efficiently under real-world conditions, with stable transmission and real-time monitoring throughout the flight.

The system’s reliability gave Tom the confidence to focus entirely on capturing the perfect shot, and the ability to monitor audio in real time meant no wasted time or resources. The LARK MAX 2’s performance proved it to be an essential tool for outdoor shoots and long-range testing.

“Having both my commentary and the drone’s motor sound on separate channels made post-production analysis incredibly easy,” Tom explained. The split stereo channel feature simplified the task of pinpointing when any audio degradation occurred, saving significant time during editing.
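
The same channel separation lends itself to a quick, scriptable dropout check in post. The sketch below is illustrative only: it assumes the recording is exported as a plain stereo WAV with the drone mic on the second channel, and the filename, window length, and silence threshold are hypothetical placeholders rather than part of Tom’s actual workflow.

```python
# Minimal sketch: scan the drone-side channel of a split-stereo WAV for likely
# dropouts. Filename, channel assignment, window size, and threshold are
# illustrative assumptions, not values taken from the production itself.
import numpy as np
import soundfile as sf

AUDIO_FILE = "lark_max2_range_test.wav"   # hypothetical export from the recorder
WINDOW_S = 0.5                            # analysis window in seconds
SILENCE_DBFS = -60.0                      # RMS level treated as a dropout

audio, sr = sf.read(AUDIO_FILE)           # stereo file -> shape (samples, 2)
drone_ch = audio[:, 1]                    # assume channel 2 carries the drone mic

win = int(WINDOW_S * sr)
for start in range(0, len(drone_ch) - win, win):
    chunk = drone_ch[start:start + win]
    rms = np.sqrt(np.mean(chunk ** 2))
    level_dbfs = 20 * np.log10(rms) if rms > 0 else float("-inf")
    if level_dbfs < SILENCE_DBFS:
        t = start / sr
        print(f"Possible dropout around {t:7.1f} s ({level_dbfs:.1f} dBFS)")
```

Windows whose RMS level falls below the threshold are printed with a timestamp, which can then be lined up against the distance readout captured during the flight.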

The LARK MAX 2 enabled Tom to push the limits of drone videography without worrying about technical limitations, and its robust performance under tough conditions made it an invaluable part of his toolkit.

]]>
Freedom to Frame: How Pyro 5 Transformed On-Set Monitoring for Kauê Zilli https://www.hollyland.com/blog/application/freedom-to-frame-how-pyro-5-transformed-on-set-monitoring-for-kaue-zilli Fri, 19 Dec 2025 07:26:06 +0000 https://www.hollyland.com/?p=480549 ]]> Hollyland’s Pyro 5 wireless video system brings creative freedom and technical confidence to high-end productions, allowing directors and cinematographers to monitor every frame in real time without the limits of cables.

“The Pyro 5 eliminated HDMI cables and let me move the camera freely without worrying about tangling.”

Meet Kauê Zilli

Based in Brazil, Kauê Zilli is a freelance director of photography and camera operator who specializes in high-end drama series, commercials, and feature films across Latin America. Whether working with small, highly adaptive crews or larger teams, Kauê emphasizes cinematic framing, dynamic movement, and controlled lighting, crafting visuals that balance precision with artistic expression. Recent projects have taken him to Brazil, Mexico, and Argentina, where he has collaborated with medium- to large-scale crews of 30–50 professionals across multiple camera units.

Challenge & Setup

While shooting the feature film My Life with Shurastey, Kauê faced a familiar challenge on modern sets: maintaining freedom of movement while ensuring every department sees what the camera sees. Traditional HDMI cables restricted motion and cluttered the set, slowing transitions between shots.

Filming took place across both studio interiors with complex lighting setups and urban or remote exterior locations, often through long shooting days and night sequences. With multiple camera units working simultaneously, signal reliability and low latency were critical.

To solve these issues, Kauê deployed the Hollyland Pyro 5 system. The transmitter was mounted on the main camera, while the director and key crew monitored the feed on an iPad via the mobile app, and the gaffer used the secondary monitor to adjust lighting.

Solving Real On-Set Pain Points

The Pyro 5 quickly became an essential part of the workflow. In the morning, Kauê and his team built the camera rig, mounting the transmitter on the main camera and linking it to both monitors and the mobile app. The setup was straightforward, and once the connections were established, the crew could immediately check framing and exposure on multiple screens before the first take.

As the day progressed, the team captured a series of interior scenes inside the studio. The Pyro 5 ran continuously for more than six hours, maintaining a strong, clean signal without any dropouts. Even after extended operation, the transmitter stayed cool to the touch, proving its stability during long shooting sessions. The director relied on the iPad feed to review performances between takes, while the gaffer monitored the same image to fine-tune lighting levels in real time. The system’s consistent transmission allowed every department to stay aligned without the clutter and limitations of cabled monitors.

When the production moved outdoors for the evening’s exterior shots, the Pyro 5 continued to perform flawlessly. Despite the wider setup and greater distances between camera and monitors, the signal remained reliable and responsive. Quick battery swaps kept the system running smoothly, and the crew completed the night sequence with uninterrupted monitoring from start to finish.

“Booting up takes around 30 seconds, but once running, the Pyro 5 delivers a stable, high-quality image across the entire set.”

Takeaways

For Kauê Zilli, the Pyro 5 redefined wireless monitoring on the set of My Life with Shurastey. It provided stable, low-latency transmission and flexible viewing for multiple departments, improving collaboration between the camera and lighting teams.

From high-end series to independent features, the Pyro 5 gives cinematographers the freedom to frame, move, and create without compromise, making it an essential part of modern on-set workflows.

“Reliable, stable, and beautifully designed, the Pyro 5 gives me the confidence to focus entirely on the image.”

]]>
A Powerful Partner for How Ridiculous: Helping Australian YouTube Creators Capture High-Impact Stunts https://www.hollyland.com/blog/product-info/a-powerful-partner-for-how-ridiculous-helping-australian-youtube-creators-capture-high-impact-stunts Wed, 17 Dec 2025 06:26:10 +0000 https://www.hollyland.com/?p=479589 ]]>
User Profile

How Ridiculous creates large-scale stunts and epic moments for their fans. They drop objects from extreme heights and pull off all kinds of outrageous experiments. Their YouTube channel, How Ridiculous, is known for these massive, physics-defying challenges. For the episode “Can 1000 Exercise Balls Bounce a Falling Car?”, the team filmed in the Perth Hills of Western Australia. The shoot ran from 9 a.m. to 7 p.m. in a wide-open rural field that stretched roughly 300 × 300 meters, giving them plenty of room to pull off their biggest ideas.
You can watch this episode at the following link: https://www.youtube.com/watch?v=rrVLYk9TUlI

Usage Scenario

Filming stunts at this scale comes with several challenges. Camera operators sometimes have to mount high-speed cameras in hard-to-reach places to capture the best angles. Immediately after each stunt, the How Ridiculous hosts often react to the footage, and these reactions become an important part of the final video. Previously, camera operators could preview only one camera at a time via a wired connection, which made on-site reviews less efficient. For this particular shoot, the high-speed cameras had to be mounted two meters above the ground to follow action taking place above a giant ball pit.

The Cosmo C2 removed that limitation by allowing cameras to be placed anywhere without running cables. Its dual-input wireless transmission sent live feeds from both high-speed cameras—the Phantom and the Ember—to one centralized monitor, letting key staff view both feeds together in real time from a safe and comfortable location. With the Cosmo C2, the team captured the best angles and still previewed the footage with ease.

Coordination across this field was equally crucial. Stunts often take place in large outdoor environments where seamless, reliable, real-time communication is critical for setting up shots and synchronizing timing across multiple camera angles. For this shoot, the team needed tight coordination as multiple objects—cars, boats, a piano, and more—were dropped by a crane into the giant ball pit. The Solidcom C1 Pro headsets allowed the camera operators, the Head of Production, and the Creative Director to coordinate effortlessly and ensure every moment was captured with precision and consistency, so that no stunt was mistimed and no camera angle was missed.

Featured Products

Cosmo C2

Solidcom C1 Pro

User Experience

“Hollyland gear is game-changing. We love how the Solidcom C1 Pro enables us to communicate reliably over long distances to coordinate our stunts. It has made it so much easier to confirm that all cameras are rolling before a drop.”

– Josh, Head of Production

“The Cosmo C2 allows us to place cameras in the optimal position to capture the best shot possible. Previously, we always had to worry about running cables, but this is a thing of the past.”

– Jack, Editor and Camera Operator

“The Cosmo C2 is an absolute GAMECHANGER”

– Toby, Editor and Camera Operator

“The Solidcom C1 Pro is amazing. I love it.”

– Joseph, Editor and Camera Operator

Overall Experience

During the YouTube stunt filming project, the team found that the Cosmo C2 and Solidcom C1 Pro streamlined their workflow. The Cosmo C2 freed the cameras from the limitations of wired connections, and the Solidcom C1 Pro solved communication challenges in the large outdoor space. Together, the two systems reduced setup and confirmation time and minimized missed shots. The team could focus more on the shoot itself and capture every epic moment their fans love.

]]>
A Reliable Communication Partner for Denis Zuran and His Team in Cinematic Content Production https://www.hollyland.com/blog/application/a-reliable-communication-partner-for-denis-zuran-and-his-team-in-cinematic-content-production Mon, 15 Dec 2025 06:32:54 +0000 https://www.hollyland.com/?p=478754 ]]>

User Profile

Denis Zuran is a Slovenia-based content creator working across filmmaking, photography, and marketing. His channel, Luminous Media, is run by a two-person team focused on photo/video gear reviews, creative challenges, and hands-on industry content, with additional crew brought in when needed. He also works as a content creator at Funkit Marketing, a full-service marketing agency with a three-person photo/video department and a wider team of 12 professionals. Across all his projects, Denis prioritizes cinematic visuals, clear communication, and efficient production workflows.

Usage Scenario

The project was a combination of gear review and field testing for the Solidcom SE system, produced for the Luminous Media channel. It tested the intercom in real filmmaking situations, including commercial-style shoots, dynamic outdoor filming, and small-team productions, and aimed to demonstrate how clear communication improves efficiency on set and how reliably the headsets perform for small creator teams.


Filming took place in Jeruzalem, Slovenia, across both indoor and outdoor locations. The outdoor shoot involved morning mist and heavy fog, providing challenging moisture and visibility conditions that tested the system’s water resistance and durability. Indoor setups provided controlled lighting for talking-head segments and product shots. A total of four team members used the Solidcom SE, with headsets allocated to the director, the FPV drone pilot, the car driver, and the camera/gimbal operator.

Hollyland Products in This Project

Hollyland Solidcom SE – 4S Global Version

User Experience

“Honestly, we could actually talk and get things done without yelling across the set — game changer!”

– A crew member

“It made coordinating the drone, car, and camera so much smoother. I can’t imagine shooting without it now. Also, they look badass!”

– A crew member

“The audio was super clear, even in the fog — we didn’t miss a single cue.”

– A crew member

“Using these headsets just made everything easier and way more fun on set.”

– A crew member

Overall Experience

The features that mattered most to the team included:

  • Battery Life
  • Operating Range
  • Wearing Comfort
  • App Control
  • Noise Cancellation
  • Water Resistance

]]>