
Agora.io Fuels Faster Time-to-Market for Mobile App Developers with Agora Partner Gallery


SAN FRANCISCO, Oct. 1, 2019 /PRNewswire/ — Agora.io, the leading voice, video and live interactive streaming platform, today announced the launch of Agora Partner Gallery, a network of preferred development, technology, and platform vendors, to help Agora’s customers speed the development and launch of mobile, web, and desktop applications with real-time engagement features.

Agora customers seek technology solutions that enable them to bring their offerings – including mobile apps and software programs – to their end customers more quickly and efficiently within a highly competitive market space. The Agora Partner Gallery offers Agora customers direct access to a network of development and technology partners who provide add-on solutions and services that further accelerate go-to-market time with comprehensive offerings, all with the added benefit of full compatibility with the Agora real-time voice and video SDK.

“We understand adding real-time voice and video to an app is complex, and increasingly our customers ask for other complementary real-time engagement features to complete their solution, which adds another layer of complexity to the project,” said Virginia Liu, SVP of Marketing and Ecosystems at Agora. “The new Agora Partner Gallery connects our customers with technology providers and development agencies that specialize in real-time engagement, so they can get the additional features and support they need to not only create an engaging app experience, but also get the app in the hands of their users faster.”

All partners in the Agora Partner Gallery are required to go through a verification process to ensure they are compatible with the Agora SDK. Additionally, Agora requires all partners to provide a demo app or customer reference based on the Agora SDK to demonstrate their technical expertise. Current participating partners in the Agora Partner Gallery include a variety of technology providers, ranging from face filters, avatars, and voice changers to voice and video analytics, whiteboard collaboration, and PSTN connectivity. A full list of partners can be found here.

Interested in becoming a featured partner in the Agora Partner Gallery? Learn more about our partner program here or apply today to become a development or technology partner.

About Agora.io

Founded in 2014, Agora.io is a global company with offices in Santa Clara, London, Bangalore, and Shanghai and customers in over 100 countries. Agora.io offers a real-time engagement platform-as-a-service that allows developers to easily embed voice, video, interactive streaming, and messaging for any mobile, web or desktop application and go live globally in a matter of days.

With over 20 billion minutes of monthly usage on our network, Agora.io is trusted by developers and business managers and powers live streaming and video interaction for leading social and enterprise brands across the globe, with use cases in a wide variety of industries such as social, gaming, workflow collaboration, enterprise training & branding, e-commerce, healthcare and more. Agora.io services are backed by an SLA, priced very competitively, and GDPR compliant.



Disrupting the Future of Enterprise Video Collaboration at BoxWorks 2019


The internet and the proliferation of cloud computing technology have transformed the way we work. Not only has our work become borderless, but it is now more streamlined than ever before.

Unlike the siloed platforms of the past, today’s enterprise software is always connected, updated in real time, and interoperable with the tools used across every department. All of this results in workflows that are more efficient, transparent, and collaborative, which is the catalyst to true digital transformation.

At Agora.io, we’re constantly reimagining how real-time communications (RTC) features like voice, video, and interactive streaming will further enhance the tools we use to work. The way we see it, enterprise video conferencing platforms only begin to scratch the surface of what’s possible with RTC. That’s why we’re excited to explore the latest innovations in enterprise productivity as a proud sponsor of BoxWorks 2019.

Why BoxWorks?

Hosted by Cloud Content Management platform Box, BoxWorks brings together “thought leaders and disruptors across the industry as they share the knowledge and insights you need to transform how your business operates.” Agora and Box share a passion for seamless enterprise collaboration and we both envision a future of frictionless workflows.

Get an exclusive look

Our Chief Operating Officer Reggie Yativ will be taking the stage to announce the latest advancement in enterprise productivity. While we can’t spill the beans just yet, he will be unveiling a new integration that will transform the way people collaborate inside Box. If you’re attending BoxWorks, be sure to join us for Reggie’s session on October 3, 2019, at 1:30 PM PT at the Partner Pavilion, Level 1, for an exclusive, first-hand look.

Meet the Agora team

The Agora.io team will also be on hand throughout the conference to showcase our high-quality, ultra-low latency video solutions. Stop by booth S4 to learn how you can integrate live voice and video into your business’s workflows. Come talk to our team, ask questions, and score some swag!


Agora.io Thinks Inside the Box with New Integrated Video Capabilities


Company brings video conferencing directly to Box

SAN FRANCISCO, Oct. 3, 2019 /PRNewswire/ – BoxWorks 2019 – Agora.io, the leading voice, video and live interactive streaming platform, today announced its integration with Box, a leader in cloud content management, bringing video conferencing directly into the Box workflow.

The integration allows Box users around the world to collaborate with real-time video and voice in a whole new way without the need for a third-party video conferencing service. Agora is a sponsor of BoxWorks 2019, Box’s annual conference in San Francisco on October 3-4, and will be providing a demo of the integration on October 3rd inside the Partner Pavilion on Level 1 at 1:30 p.m. PT.

The Agora integration in Box is a free reference application using the Agora SDK that showcases the power of having a fully customizable video solution. Its unique features include:

  • The ability to initiate a video call while viewing and working on shared content
  • The ability to transcribe and record conversations directly inside your Box account

The transcription service is powered by IBM Watson through the Box Skills Kit. Box Skills provides a voice transcription service utilizing natural language processing (NLP) to collect and store each conversation on demand. The kit also predicts conversational keywords to help readers find and access important data discussion points.

“Organizations around the globe are seeing an uptick in the number of remote employees. Our integration with Box will enable distributed workforces to come together and collaborate in real-time, wherever they are in the world,” said Reggie Yativ, chief operating officer and chief revenue officer at Agora. “Box revolutionized the way in which teams work together and we’re excited to further enhance their solution with real-time video, and to hear how their users reap the benefits of seamless team connectivity.”

Among the 70 percent of the global workforce who work remotely at least once per week, 88 percent cite that virtual teamwork is critical to their productivity. With enhanced team connectivity through the Agora solution, remote workforces can communicate and collaborate on any document from anywhere in the world.

Agora will be at BoxWorks 2019 this week. Stop by Booth #S4 for more information on this integration and what it could mean for your business.

About Agora.io

Founded in 2014, Agora.io is a global company with offices in Santa Clara, London, Bangalore, and Shanghai and customers in over 100 countries. Agora.io offers a real-time engagement platform-as-a-service that allows developers to easily embed voice, video, interactive streaming, and messaging for any mobile, web or desktop application and go live globally in a matter of days.

With over 20 billion minutes of monthly usage on our network, Agora.io is trusted by developers and business managers and powers live streaming and video interaction for leading social and enterprise brands across the globe, with use cases in a wide variety of industries such as social, gaming, workflow collaboration, enterprise training & branding, e-commerce, healthcare and more. Agora.io services are backed by an SLA, priced very competitively, and GDPR compliant.



Recording Live Streams with an On-Prem Recording Service with C++ and Docker


Live streaming apps are growing in popularity and one of the most requested features is recording the streams for later use.

Today we’ll go through the steps to build a Dockerfile to easily deploy an Agora Recording Server within your backend orchestration.

Prerequisites

  • A basic understanding of Docker
  • Docker locally installed
  • A developer account with Agora.io
  • An Agora.io based live streaming or communications app

Setup Project

Let’s start by opening our favorite code editor and creating our Dockerfile.

We’ll set our container to use the latest Ubuntu image as its base, run apt update to get all updates, and then we’ll add the commands to install our dependencies.

# use latest ubuntu base image
FROM ubuntu:latest
RUN apt update

# install any needed packages/tools
RUN apt install -y git
RUN apt-get install -y build-essential
RUN apt-get install -y gcc

For debug purposes, let’s also add in support for core dumps.

RUN ulimit -c unlimited

Next we will download the Agora Linux SDK and unzip it to the root directory.

# download agora sdk archive
ADD https://download.agora.io/ardsdk/release/Agora_Recording_SDK_for_Linux_v2_3_4_FULL.tar.gz /

# unzip sdk
RUN tar xzf Agora_Recording_SDK_for_Linux_v2_3_4_FULL.tar.gz

Once the SDK is unzipped, we’ll need to switch the working directory to samples/cpp and run make.

# go to c++ sample directory
WORKDIR Agora_Recording_SDK_for_Linux_FULL/samples/cpp

# run make
RUN make

Now we need to make sure we have an Agora AppID, because the Recording Server will need to use it along with the Channel name to join the channel.

# run recording server
CMD ./recorder_local --appId <YOUR_APPID> --uid 0 --channel demo \
    --channelProfile 0 --appliteDir ../../bin

Note: there are two channelProfiles: 0=Communication, 1=Live broadcast

Lastly we’ll expose the necessary ports.

# receiving ports
EXPOSE 4000/udp
EXPOSE 41000/udp

EXPOSE 1080/tcp
EXPOSE 8000/tcp

EXPOSE 4001-4030/udp
EXPOSE 1080/udp
EXPOSE 8000/udp
EXPOSE 9700/udp
EXPOSE 25000/udp

Now we are ready to build and run our container.

docker build -t agorarec . && docker run -it agorarec

Now using our Agora Live Streaming web or native app, we can test the recording. If you don’t have an app built or want to test quickly for this tutorial, we can use the group chat web app (https://agora-group-video-chat.herokuapp.com) that we previously built together. Once the first user enters the room, you should see the output from the container in your console.

testing my build with 3 streams in the channel

When testing with 3 streams, your logs should look similar to those below.
User 2199979035 joined, RecordingDir:./20190918/demo_225632_937626000/
onFirstRemoteAudioFrame,User 2199979035, elapsed:331
User 1503933393 joined, RecordingDir:./20190918/demo_225632_937626000/
onFirstRemoteAudioFrame,User 1503933393, elapsed:339
pre receiving video status is 0 now receiving video status is 1
pre receiving audio status is 0 now receiving audio status is 1
User 4190354927 joined, RecordingDir:./20190918/demo_225632_937626000/
onFirstRemoteAudioFrame,User 4190354927, elapsed:18291
User 4190354927 offline, reason: 0
User 2199979035 offline, reason: 0
User 1503933393 offline, reason: 0
pre receiving video status is 1 now receiving video status is 0
pre receiving audio status is 1 now receiving audio status is 1
stop
pre receiving video status is 0 now receiving video status is 0
pre receiving audio status is 1 now receiving audio status is 0

Next Steps

If you’d like to compare Dockerfiles 😉, you can scope mine below.

View the code on Gist.
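If you'd rather assemble it yourself, here is a consolidated sketch of the Dockerfile built from the snippets above; <YOUR_APPID> and the channel name are placeholders you'll need to replace.

# use latest ubuntu base image
FROM ubuntu:latest

# install the build tools needed to compile the sample recorder
RUN apt update && apt install -y git build-essential gcc

# allow core dumps for debugging
RUN ulimit -c unlimited

# download the Agora Recording SDK and unzip it to the root directory
ADD https://download.agora.io/ardsdk/release/Agora_Recording_SDK_for_Linux_v2_3_4_FULL.tar.gz /
RUN tar xzf Agora_Recording_SDK_for_Linux_v2_3_4_FULL.tar.gz

# build the C++ sample recorder
WORKDIR Agora_Recording_SDK_for_Linux_FULL/samples/cpp
RUN make

# receiving ports
EXPOSE 4000/udp
EXPOSE 41000/udp
EXPOSE 1080/tcp
EXPOSE 8000/tcp
EXPOSE 4001-4030/udp
EXPOSE 1080/udp
EXPOSE 8000/udp
EXPOSE 9700/udp
EXPOSE 25000/udp

# run the recording server (replace <YOUR_APPID> and the channel name)
CMD ./recorder_local --appId <YOUR_APPID> --uid 0 --channel demo \
    --channelProfile 0 --appliteDir ../../bin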

The last step would be to deploy this Dockerfile to your cloud services provider to be triggered by a Serverless function or possibly an always-on service. The choice is up to you!

For more information about the Agora.io On-Prem Recording SDK, please refer to the Agora.io On-Prem QuickStart and API Reference.


Better Monetize Your Social App with Live Engagement Features


Almost half of the total world population—nearly 3.48 billion people—are using some form of social media. While giants like Facebook, Twitter, and Snapchat have hundreds of millions, even billions of users, niche apps are building their own small but dedicated communities as well.

No matter the size of an app’s user base, it has to make money to survive. Many social apps are free to use and depend largely on advertisers or paid features for revenue, so finding ways to increase in-app engagement and create revenue streams outside of downloads is crucial to the app’s long-term success. That’s why many developers incorporate live engagement features in their social apps.

Plan on building your own social app? Implementing live video, voice, and interactive streaming features into your user experience can help you stand out amongst the competition and open a variety of engagement-driven revenue streams.

More Engagement Leads to More Revenue

The goal of any app is to increase engagement among users and with the app itself, but social apps are particularly dependent on their engagement metrics, as the numbers are used to drive in-app advertising and other monetization opportunities. However, finding a way to keep users engaged and encourage them to return to the app beyond the initial download can be difficult. That’s why more social apps are implementing live features—live interactive streaming and live group chat—into the user experience.

For example, MeetMe created “Live,” a feature that allows users to live stream, watch other live streams, invite others to join their live streams, and chat with other users in real time. Introduced in March 2017, Live has significantly increased user engagement, with 25 percent of MeetMe’s active users taking advantage of live video daily as of April 2018. The Meet Group has also earned over $29 million in annualized revenue since adding the feature.

Better User Retention Leads to More Opportunities

Another benefit of implementing real-time features like live voice and video chat is an increase in user retention. Rather than using text messaging, FaceTime, or another app to communicate with their friends and family, users are compelled to communicate within the social app for a more convenient experience. This leads to an increase in usage time, which can help developers and product owners increase their advertising revenue.

While there are several dedicated voice and video messaging apps such as WhatsApp and Zoom, many have faced scrutiny over their security and data practices. That’s why many users are willing to pay for live voice and video calls—especially if the quality of experience and service is high. This presents a unique opportunity for social apps that uphold strong security practices to monetize their live video and voice offerings.

The More Interactive the Experience, the Better

Live engagement is about more than just adding real-time video and voice to an app. By incorporating add-on options and in-app purchases, developers and business managers can encourage even more user interactions and build new monetization channels outside of in-app advertising.

Here are some examples of live engagement features making waves in the social space.

  • Virtual gifting. What started as a trend on international live streaming platforms has become increasingly popular in the U.S. and other parts of the world, especially as more social apps look for ways to pay their creators for their contributions to the community. Virtual gifts, which are typically non-physical objects that are purchased by users and then exchanged for real cash, are a win for all parties involved. Users can interact directly with their favorite broadcasters, broadcasters can earn revenue for their efforts, and developers and business managers can drive more user engagement and increase app stickiness.
  • Virtual filters and stickers. Animated filters and stickers have become very popular amongst social app users. While they are often offered free, platforms like Snapchat offer them as in-app purchases or give businesses the opportunity to create their own at a price. In fact, Snapchat’s viral AR filters were a key growth factor in the company’s successful Q2 this year, helping the company attract 7 to 9 million new users.
  • Voice-changing filters. Like animated filters and stickers, voice effects offer users a way to customize their social experience. Sound mixing and sound reverb make live streams and video calling more fun for everyone tuning in to watch a user’s live show or broadcast.

Integrate Your Own Live Engagement Features

No matter what type of social app you want to create, Agora offers a variety of high-quality, scalable real-time communications solutions that you can quickly and easily integrate to create a truly engaging user experience. Agora’s SDKs come with a powerful tech stack for delivering live video and voice directly from inside your app, while Agora’s Software-Defined Real-Time Network™ (SD-RTN™) ensures that every user sees and hears clearly in real time.

To make the most of real-time engagement experiences and enable its customers to monetize their apps, Agora offers over 600 different AR stickers, virtual gifting capabilities, animated filters, and voice-changing filters—all of which can be customized for your social app.

To learn more about how Agora can help you engage with your users and monetize your social app, talk to our team.


Agora.io Partners with Global IT Provider Stefanini to Expand Live Video Solutions in Europe, Brazil


SANTA CLARA, CA — OCTOBER 17, 2019 — Agora.io, the leading voice, video, and live interactive streaming platform, today announced a partnership with Stefanini, a $1B global IT provider, to expand its reach in Europe and Brazil. As a strategic partner in the two regions, Stefanini will give its current and future customers access to cutting-edge real-time video solutions powered by Agora’s global real-time communications (RTC) network.

For more than 30 years, Stefanini has helped companies around the world undergo digital transformation through its broad portfolio of solutions that combine innovative consulting, marketing, mobility, personalized campaigns, and artificial intelligence services. By adding Agora’s live video capabilities to its offerings, Stefanini will give its customers the ability to implement real-time engagement solutions into their existing web, mobile, desktop, and IoT experiences.

“Our mission at Stefanini is to guide our customers through their digital transformation journeys, disrupting and converting new ideas into bespoke, actionable business realities,” said Farlei Kothe, CEO EMEA at Stefanini. “We are proud to work with a wide range of partners in order to meet all of our customer needs and now, thanks to our new partnership with Agora, the ability to offer real-time video solutions is sure to be a vital asset to our work.”

To help customers implement Agora’s live video solutions, Stefanini will employ a dedicated developer team focused on the emerging live video market. Members will be trained as ‘Agora Experts’ and serve as an extension of Agora’s team to help customers with the integration process. As part of the Agora Partner Program, Stefanini will receive technical training and support, access to a vast library of resources, priority access to product roadmaps, and lucrative revenue share.

In the five years since its launch, Agora has helped customers in over 100 countries and over 200,000 developers embed real-time engagement and communications solutions into their applications and platforms. By working with local partners like Stefanini, Agora aims to reach a plethora of new customers who want to create new channels for communication and increase engagement among their users and customers.

“Stefanini is a highly influential IT provider in two of our key markets, Europe and Brazil, so we couldn’t be more thrilled to partner with such an innovative company that is a true leader in the space,” said Reggie Yativ, COO & CRO at Agora.io. “As we aggressively expand our services and solutions globally, we look forward to partnering with many exceptional organizations that share our vision for a more connected world through real-time engagement and innovation.”

The underlying technology at the core of the new partnership is Agora’s Voice & Video SDK, which allows developers to embed crystal-clear and real-time video conferencing and interactive streaming into any application. The partnership allows Stefanini and its customers to tap into Agora’s network of 200 globally distributed data centers and proprietary Software-Defined Real-Time Network (SD-RTN™) to connect users globally via high-quality, real-time video communications.

To learn more about the Partner Program, click here.

About Agora.io

Founded in 2014, Agora.io is a global company with offices in Santa Clara, London, Bangalore, and Shanghai and customers in over 100 countries. Agora.io offers a real-time engagement platform-as-a-service that allows developers to easily embed voice, video, interactive streaming, and messaging for any mobile, web or desktop application and go live globally in a matter of days.

With over 20 billion minutes of monthly usage on their network, Agora.io is trusted by developers and business managers and powers live streaming and video interaction for leading social and enterprise brands across the globe, with use cases in a wide variety of industries such as social, gaming, workflow collaboration, enterprise training & branding, e-commerce, healthcare and more. Agora.io services are backed by an SLA, priced very competitively, and GDPR compliant.

The Agora.io platform is powered by the Software Defined Real-Time Network (SD-RTN™), a global delivery network of 200 data centers. SD-RTN™ dynamically manages the routing of voice and video to overcome severe packet loss incidents and enables seamless, uninterrupted, high-quality real-time streaming delivery across the globe, even in the most remote locations and emerging markets.

About Stefanini

Stefanini (www.stefanini.com) is a Brazilian multinational with 30 years of experience in the market, investing in a complete innovation ecosystem to meet the main verticals and assist customers in the process of digital transformation. With robust offerings aligned with market trends such as automation, cloud, Internet of Things (IoT) and user experience (UX), the company has been recognized with several awards in the area of innovation.

Today, the company has a broad portfolio of solutions that combine innovative consulting, marketing, mobility, personalized campaigns and artificial intelligence services for traditional solutions such as service desk, field service and outsourcing (BPO).

With a presence in 40 countries, Stefanini was named the fifth most internationalized company, according to the Dom Cabral Foundation ranking of 2017.



Machine Learning for AOM/AV1 and its Application in RTC


We continue our review of the All Things RTC conference with a play-by-play of the informative, invaluable, and inspiring presentations that were given during the event. Recently, we went back over the keynote speech by Debargha Mukherjee, discussing the history of AV1 and its ongoing impact on the industry.

In parallel to this, Zoe Liu, co-founder and president of the startup Visionular, also gave a talk titled “Machine Learning for AOM/AV1 and its Application in RTC.”

Here, she used Debargha’s presentation as a launchpad and showed how her company has started to harness machine learning to advance AV1 development and lay the groundwork for AV2.

Zoe discussed how Visionular is one of the 42 members of AOM, the organization founded to build royalty-free codecs and promote a royalty-free ecosystem. Her company has focused on AV1’s coding efficiency and its potential for further optimization.

Machine Learning Informs Encoding Progress
Machine learning can be used primarily to improve encoder performance, speed, and video enhancement on a variety of levels, including frame prediction and frame synthesis, optimizing the AV1 codec for future purposes.

Machine Learning for Encoder Speed
By applying machine learning to encoder processing on the partition side, Visionular has been able to see 30-50% speed improvements on average. Usually, there is a lot of video partitioning going on, which is quite time consuming. By applying a new network, a new frame can be introduced with a partition map supplied far faster than ever before.

Machine Learning for Mode Selection
Mode selection is another immediate application for machine learning and AV1, with empirical encoding rules able to be replaced by neural-network-based decisions. This lets modes be transformed or switched far faster and more seamlessly, with less user oversight.

Machine Learning for Frame Prediction
Machine learning can be fed a stream of video frames that, even though they may have lower quality or gaps in the feed, can have quality restored or have missing areas filled in for better consistency. Overall video appearance and performance can be upgraded, thanks to neural network training and processing, which can even synthesize entirely new frames based on video histories.

These are just a few of the growing applications that machine learning provides with AV1. Visionular has also developed its own version called dav1d, a support process for AV1 that is exceptionally fast, scalable, and production ready. It’s also compatible with most major browser platforms, with more being added every day.

Zoe finished by introducing the Visionular team, a small group of experts and an aggregation of codec programmers and machine learning pioneers. They are all contributing to the AOM organization and are finding new ways to shrink video size while maintaining or improving quality.

Click here to watch the full presentation.


The Future of Social Gaming With Voice and Video


Josh Constine, the editor-at-large at TechCrunch, moderated a discussion between Or Ben Shimon, the CEO of Comunix, Selcuk Atli, the CEO at Bunch, and Parijat Bandyopadhyay, the principal technical architect of Mech Mocha. Constine has a reporting focus on startups that emphasize research and development into virtual reality, augmented reality, gaming, and social networking—the emphasis of their panel at the AllThingsRTC conference.

Constine pointed out the amazing evolution of gaming from bit sprites on consoles to the most recent creations of the panelists’ various companies. This is creating a whole new form of connection (and even intimacy) with other players and the game experience itself, which in turn drives higher usage times, and monetization and business opportunities.

What is Being Built?

For instance, Bunch is building apps for live mobile games, almost like having Xbox Live on smartphones. Young gamers jump into it on their devices and collaboratively play while talking with one another.

Comunix has developed an app called PokerFace, the first social poker game with video chat interaction around the virtual table. And Mech Mocha is focusing more on the social gaming platform with video capabilities.

Why Do Voice and Video Matter so Much to Games Now?

The main point was that games are intended to be communal activities in the first place, with tabletop and board games bringing friends and families together for shared experiences. As more and more games are being built for smart devices, mobile devices, and laptops or desktops, regaining that sense of actual personal interaction has become increasingly important. Voice and video capabilities provide a stronger connection despite physical distance.

Being able to laugh with or yell at other players is important, after all!

It also heightens the digital gaming experience. When you realize you’re engaging with real players, and not just a generic avatar, it makes it more fun and compelling.

Plus, video games with real-time communication let players meet new people from across the world and make real connections and conversations. There’s an emotional resonance when that other player looks like they’re right across the table from you.

Twitch was brought up as a precursor to the RTC gaming experience, with people able to watch someone play a game and comment on the activity—but the engagement was still distant from real connection and lacked personal involvement, since there’s less of an established relationship between the gamer and their audience.

What is the Evolution of Social Gaming?

Social gaming used to be just things like sharing screenshots of a high score or telling friends about a win after the fact. Now it is immediate, and everyone can be increasingly involved and share in the sense of victory (or loss).

Social gaming will continue to split into two main arenas: public gaming, where multiple players are being watched and interacted with by a global audience (such as e-sports), and private games, where a game platform is shared by a select group of users.

People want to share experiences. We want to play together, and accomplishing something in a game (or even experiencing a loss in-game) is much more powerful when many people are involved in the whole.

Social Games and Real-World Impact

There are many ways social gaming is already changing our real-life interactions: online bullying, youth slang, and virtual economies are shaping a shifting landscape where what happens in-game can reverberate throughout our daily lives. Online celebrities and e-sports players are becoming increasingly popular and gaining cultural awareness, and video and voice are empowering these huge shifts.

Where will social games go from here with the help of RTC developments? For developers, a big part of it is protecting themselves from platform risk, securing revenue channels, and protecting the players to ensure that games remain fun even as the technology continues to evolve.

For the full video, go here!



Building a One-to-Many iOS Video App with Agora

Want to build a video chat app within an hour? View our guide to learn how to quickly and easily create a video chat app that can support multiple participants using the Agora Video SDK.

Requirements

  • Xcode 10.0+
  • A physical iOS device. The iOS simulator lacks camera functionality.
  • CocoaPods (If you don’t have CocoaPods installed already, you can find instructions here).
  • An Agora account (You can sign up for free here).
  • An understanding of how to build iOS layouts with a Storyboard. If you need a refresher, there’s a great tutorial here.

Setting up the Agora Library with CocoaPods

  1. Create a new Xcode project (or use an existing one).
  2. In Terminal, navigate to the root directory of your project and run pod init to initialize CocoaPods.
  3. Open the Podfile that was created and add the following code to import the Agora library:
target 'Your App' do 
  pod 'AgoraRtcEngine_iOS' 
end
  4. Run pod install in Terminal to install the library.
  5. From now on, open YourApp.xcworkspace to edit and run your app.

Add Camera and Microphone Permissions

In order to use the microphone and camera, we’ll need to ask the user for permission to do so. In your Info.plist add the following keys:

Privacy - Microphone Usage Description 
Privacy - Camera Usage Description

Make sure you add a value for each. These values are user-facing, and will be displayed when the app asks for these permissions from the user.

Setting up the Scene

In our Main.storyboard we’ll need to add the views Agora will use to display the video feeds. For our demo, we’ll be using a single large view to display our local feed, and a collection view to show an arbitrary number of remote users, but feel free to adjust as necessary for your own needs.

The local view is in green, and the remote view template is in red, for ease of identification. Add a View object for the local stream, a UIButton to mute and hang up the call, and a UICollectionView to hold the remote streams. Your UICollectionViewCells can be as simple as a single view to hold the stream — in the example above, I’ve added an overlay to show the remote user’s name if we know it.

Make sure you hook up the views in your main View Controller, and set the View Controller as the UICollectionView’s delegate and dataSource:

View the code on Gist.

View Controller Storyboard hookups
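For reference, a minimal sketch of those hookups might look like the following; the class and outlet names are illustrative rather than the exact ones in the Gist, and the UICollectionViewDataSource/Delegate conformance is added in the extension shown in the collection view section below.

import UIKit
import AgoraRtcEngineKit // newer SDK versions use: import AgoraRtcKit

class VideoCallViewController: UIViewController {

    // Storyboard outlets (names are illustrative)
    @IBOutlet weak var localVideoView: UIView!
    @IBOutlet weak var remoteUserCollectionView: UICollectionView!
    @IBOutlet weak var muteButton: UIButton!
    @IBOutlet weak var hangUpButton: UIButton!

    // Remote user IDs currently being displayed
    var remoteUserIDs: [UInt] = []

    override func viewDidLoad() {
        super.viewDidLoad()
        // The view controller supplies the cells for the remote streams
        remoteUserCollectionView.dataSource = self
        remoteUserCollectionView.delegate = self
    }
}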

And connect up your custom collection view cell:

View the code on Gist.

Collection View Cell Storyboard hookups

Tip: If you want to add overlays to your video streams, make sure you don’t add them as subviews of the view objects you’re going to use as video screens. The video canvas will be drawn on top of them. Add them as sibling views instead.

Initialize the Agora Engine

In order to use the Agora engine, we need to create an instance of AgoraRtcEngineKit with our app ID.

First, we will need to retrieve our app ID by going to the Agora Dashboard. If you haven’t created an Agora project yet, do so now by clicking “New Project.”

Once you have a project, click the “Edit” button (or open the Project Management pane) to view that project’s details. Copy the app ID and add it to your project. If you enabled the App Certificate, you’ll also need a Token to join channels — you can generate a temporary one by clicking “Generate Temp Token.” You can also read our tutorial on generating your own tokens here.

The first call you make to Agora must be to initialize a shared Agora engine. We’ll make sure to do this by creating a helper function that initializes the engine if we haven’t done so yet, and just returns it if we have. That way we can just call it whenever we need a reference to the engine without having to worry about who does it first.

View the code on Gist.

Agora Engine initialization

Tip: This is a quick way to ensure the engine is only initialized once when you need it, but for a larger app you may want to consider wrapping it in a Singleton instead, so you can easily access it from anywhere.
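A minimal version of that helper, assuming the engine is stored in an optional property on the view controller, could look like this (replace the placeholder with your own App ID):

    private var agoraKit: AgoraRtcEngineKit?

    func getAgoraEngine() -> AgoraRtcEngineKit {
        if agoraKit == nil {
            // The view controller acts as the engine delegate so we receive the callbacks below
            agoraKit = AgoraRtcEngineKit.sharedEngine(withAppId: "<#Your App ID#>", delegate: self)
        }
        return agoraKit!
    }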

We’ll also need to implement the AgoraRtcEngineDelegate protocol so we can respond to relevant callbacks:

View the code on Gist.

RtcEngine Delegate extension

Enable Video

The next step is to tell Agora we want video enabled, and to tell it where to put the local video stream. We can then call this function from our viewDidLoad().

View the code on Gist.

Video setup

Tip: If you want to customize how the video is displayed, this is a good place to configure the video profile.
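Here is a sketch of what that setup can look like, assuming the helper and the localVideoView outlet from the earlier snippets; call it from viewDidLoad() as described above.

    func setUpVideo() {
        getAgoraEngine().enableVideo()

        // Tell the engine where to render the local camera feed
        let videoCanvas = AgoraRtcVideoCanvas()
        videoCanvas.uid = 0                // 0 means "the local user"
        videoCanvas.view = localVideoView
        videoCanvas.renderMode = .fit
        getAgoraEngine().setupLocalVideo(videoCanvas)
    }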

Join a Channel

Once the engine is initialized, joining a call is as easy as calling joinChannel() on the Agora engine.

View the code on Gist.

Join a channel
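A bare-bones version, assuming a hard-coded channel name and a nil token (valid only if your project has no App Certificate), might be:

    func joinChannel() {
        getAgoraEngine().joinChannel(byToken: nil,
                                     channelId: "default",
                                     info: nil,
                                     uid: 0) { channel, uid, elapsed in
            print("Joined channel \(channel) as uid \(uid)")
        }
    }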

Setting up Remote Video

Now is the time to put our UICollectionView to good use. We’ll keep a list of remote user IDs, and for each one, set up a remote video canvas within our collection.

View the code on Gist.

Collection view callbacks

Tip: Remember to set your custom cell’s reuse identifier in your Main.Storyboard!
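In case it helps, here is a rough sketch of those data source methods; the reuse identifier is an assumption, and for brevity the video is rendered into the cell's contentView rather than a dedicated view on a custom cell.

extension VideoCallViewController: UICollectionViewDataSource, UICollectionViewDelegate {

    func collectionView(_ collectionView: UICollectionView,
                        numberOfItemsInSection section: Int) -> Int {
        return remoteUserIDs.count
    }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "RemoteVideoCell",
                                                      for: indexPath)

        // Render this remote user's stream into the cell
        let videoCanvas = AgoraRtcVideoCanvas()
        videoCanvas.uid = remoteUserIDs[indexPath.item]
        videoCanvas.view = cell.contentView
        videoCanvas.renderMode = .fit
        getAgoraEngine().setupRemoteVideo(videoCanvas)

        return cell
    }
}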

To get this list of userIDs (and maintain it), we’ll utilize the rtcEngine(didJoinedOfUid:) and rtcEngine(didOfflineOfUid:) callbacks. Inside your AgoraRtcEngineDelegate extension, add the following functions:

View the code on Gist.

Agora engine callbacks
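A minimal version of those two delegate methods, keeping remoteUserIDs in sync with the collection view, could look like this inside the AgoraRtcEngineDelegate extension:

    func rtcEngine(_ engine: AgoraRtcEngineKit, didJoinedOfUid uid: UInt, elapsed: Int) {
        // A remote user joined: start showing their stream
        remoteUserIDs.append(uid)
        remoteUserCollectionView.reloadData()
    }

    func rtcEngine(_ engine: AgoraRtcEngineKit, didOfflineOfUid uid: UInt, reason: AgoraUserOfflineReason) {
        // A remote user left: remove them from the grid
        if let index = remoteUserIDs.firstIndex(of: uid) {
            remoteUserIDs.remove(at: index)
            remoteUserCollectionView.reloadData()
        }
    }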

And with that, you have a working video chat app. Beware of audio feedback if testing on multiple devices at once.

Polish

There are a few more pieces that we should add to make our app a little nicer. For one, our buttons don’t do anything. Let’s fix that first. Enabling the mute button is a simple call to muteLocalAudioStream():

View the code on Gist.

Mute button

We can also hang up by calling leaveChannel():

View the code on Gist.

Hang up button

Tip: If you don’t hide the local video view (or pop the view controller) you’ll end up with a static view of the last frame recorded.
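Both actions are one-liners on the engine; a sketch wired to the storyboard buttons (the action names and the isMuted property are illustrative) might be:

    private var isMuted = false

    @IBAction func didTapMute(_ sender: UIButton) {
        // Toggle whether our microphone audio is sent to the channel
        isMuted.toggle()
        getAgoraEngine().muteLocalAudioStream(isMuted)
    }

    @IBAction func didTapHangUp(_ sender: UIButton) {
        // Leave the channel and hide the local preview so we don't freeze on the last frame
        getAgoraEngine().leaveChannel(nil)
        localVideoView.isHidden = true
        remoteUserIDs.removeAll()
        remoteUserCollectionView.reloadData()
    }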

As a final touch, let’s take advantage of Agora’s ability to join channels with a username to give our remote streams some nice nameplates. We can update joinChannel() to join with a username if we have one (how you acquire those usernames is left up to you):

View the code on Gist.

Join a channel with a username

Tip: Agora recommends you call registerLocalUserAccount() before joining a channel with a username for better performance, but it’s not necessary.

And then we can extract that username when a remote user joins. Add the following block to collectionView(cellForItemAt:)

View the code on Gist.

Extract remote usernames

And we’re done! You can find the completed app here. Thanks for following along, and happy coding!


Adding Video Communication to A Multiplayer Mobile Unity Game


Have you ever imagined that while playing against your friends in a multiplayer game on your mobile device, you could see each other’s facial expressions or tease each other with jokes and funny faces? Here you’ll find a solution that doesn’t require leaving the game for another chat app. In this tutorial we are going to take Unity’s popular Tanks game to the next level and make it into a game with live video chat!

Before we get started, there are a few prerequisites for anyone reading this article.

Prerequisites

Project Setup

If you plan to use your own existing Unity project, go ahead and open it now and skip down to “Integrating Video Chat”.

For those readers that don’t have an existing project, keep reading; the next few sections are for you. (Note: this step is essentially the same as the Project Setup in Hermes’ “Adding Voice Chat to a Multiplayer Cross-Platform Unity game” tutorial, so you may have already done most of this setup by following it.)

New Unity Project

Please bear with me as the basic setup has a few steps and I’ll do my best to cover it swiftly with lots of images. Let’s start by opening Unity, creating a blank project. I recommend starting this project with the latest Unity 2018 LTS version.

Create a new project from Unity Hub

Download and import the “Tanks!!! Reference Project” from the Unity Store:

Searched by “Tanks Reference” and download this asset

When Unity asks whether you want to overwrite the existing project with the new asset, click Yes. Also accept the API update prompt that comes up next.

There are a couple more steps to getting the Tanks!!! reference project ready for building on mobile. First we need to enable Unity Live Mode for the project through the Unity dashboard (select project → Multiplayer → Unet Config).

Set max players to 6 even though Tanks!!! limits the game to 4 players and click save

Once Unity Live Mode is enabled

Building for iOS

Now that we have Unity’s multiplayer enabled, we are ready to build the iOS version. Let’s start by opening our Build Settings, switching our platform to iOS, and building the project for testing.

Update the Bundle id and Usage Descriptions

Please note: you need to have Xcode installed and setup before attempting to build the project for iOS.

When building for the first time, create a new folder “Builds” and save the build as iOS

After the project has successfully built for iOS, we will see the project in the Builds folder

Let’s open Unity-iPhone.xcodeproj, sign, and build / run on our test device

Enable automatic signing to simplify the signing process. Remove In-App-Purchase if it shows up.

Don’t start celebrating just yet. Now that we have a working iOS build we still need to get the Android build running.

Building for Android

Android is a bit simpler than iOS since Unity can build, sign, and deploy to Android without the need to open Android Studio. For this section I’m going to assume everyone reading this has already linked Unity with their Android SDK folder. Let’s start by opening our Build Settings and switching our platform to Android.

Before we try to “Build and Run” the project on Android, we need to make a couple of adjustments to the code. Don’t worry, this part is really simple: we only need to comment out a few lines of code, add a simple return statement, and replace one file.

Some background: the Tanks!!! Android build contains the Everyplay plugin for screen recording and sharing your game session. Unfortunately, Everyplay shut down in October 2018, and the plugin contains some issues that, if not addressed, will cause the project to fail to compile or to quit unexpectedly once it does.

The first change we need to make is to correct a syntax mistake in the Everyplay plugin’s build.gradle file. Start by navigating to our project’s Plugins folder, click into the Android folder, then go into the everyplay folder and open the build.gradle file in your favorite code editor.

Now that we have the Gradle file open, select all and replace it with the code below. The team that built Tanks!!! updated the code on GitHub but for some reason it didn’t make its way into the Unity Store plugin.

View the code on Gist.

The last change we need to make is to disable EveryPlay. Why would we want to disable EveryPlay, you may ask? Because when the plugin tries to initialize itself, it causes the Android app to crash. The fastest way I found was to update a couple of lines within EveryPlaySettings.cs (Assets → Plugins → EveryPlay → Scripts) so that whenever EveryPlay attempts to check whether it’s supported or enabled, we return false.

View the code on Gist.

(Assets → Plugins → EveryPlay → Scripts → EveryPlaySettings.cs)

Now we are finally ready to build the project for Android! Within Unity open the Build Settings (File > Build Settings), select Android from the Platform list and click Switch Platform. Once Unity finishes its setup process, open the Player Settings. We need to make sure our Android app also has a unique Package Name, I chose com.agora.tanks.videodemo.

You may also need to create a key store for the Android app. See this section of the PlayerSettings in Unity Editor:

Android KeyStore setting

Integrating Video Chat

For this project, the Agora.io Video SDK for Unity was chosen because it makes implementation in our cross-platform mobile project really simple.

Let’s open up the Unity Store and search for “Agora Video SDK”.

You only download the asset once, and then you can import it to different projects.

Once the plugin page has loaded, go ahead and click Download. Once the download is complete, click and Import the assets into your project.

Uncheck the last four items before import

You should then open the Lobby as your main scene. The following also shows how the services page looks for the multiplayer settings:

Discussion: in the following sections we will go through how the project is updated with new code and prefab changes. For those who just want to try everything out quickly, here is a plugin file to import all the changes. You will just need to enter the AppId on the GameSettings object, as described below, after importing.

Modify the Tank Prefab

Let’s add a plane on top of the tank to render the video display. Find the CompleteTank prefab in the project and add a 3D Plane object to the prefab. Make sure the following values are updated for the best result:

  • Position Y = 8; scale 0.7; rotate -45 degrees on X and 45 degrees on Y.
  • Do not cast shadow
  • Disable the Mesh Collider script

Plane Prefab values

Attach VideoSurface.cs script from the Agora SDK to the Plane game object.

Save the change, and test the prefab in the game by going to Training to see the outcome. You should see a tank similar to the following screen:

Tank with Plane attached

Create UI for Mic/Camera Controls

Next, open the GameManager prefab and create a container game object and add three toggles under it:

  • Mic On
  • Cam On
  • FrontCam

GameManager UI Canvas

That’s basically all the UI changes we need for this project. The controller script will be added to the prefab later in the following sections.

Controller Scripts

Next we will go over the scripts that make video chat work in this game. Before adding new scripts, we will modify an existing script to allow input of the Agora AppId.

GameSettings

Two updates to this script make the game work with the Agora SDK.

1.) Add a SerializedField here for the AppId.

Go to your Agora developer account and get the AppId (you may need to follow the instructions to create a project first):

Agora AppId

In the Lobby scene of the Unity Editor, paste the value of the App ID there and save:

Set the App ID for Agora API

2.) Add support for Android devices by asking for Microphone and Camera permissions.

View the code on Gist.
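The permission request itself boils down to Unity’s Android Permission API (available in Unity 2018.3+); here is a self-contained sketch of that logic, written as a hypothetical helper rather than the tutorial’s exact GameSettings code:

#if UNITY_ANDROID
using UnityEngine.Android;
#endif

public static class DevicePermissionHelper
{
    // Ask for microphone and camera access on Android before the Agora engine starts
    public static void EnsureAudioVideoPermissions()
    {
#if UNITY_ANDROID
        if (!Permission.HasUserAuthorizedPermission(Permission.Microphone))
        {
            Permission.RequestUserPermission(Permission.Microphone);
        }
        if (!Permission.HasUserAuthorizedPermission(Permission.Camera))
        {
            Permission.RequestUserPermission(Permission.Camera);
        }
#endif
    }
}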

Agora Controller Scripts

Before jumping right into the code, let’s understand what capabilities are needed. They are:

  • An interface to the Agora Video SDK for joining a channel, showing video, muting the microphone, flipping the camera, and so on.
  • The actual implementation of the Agora SDK event callbacks.
  • A mapping from the Unity multiplayer player ID to the Agora user ID.
  • A manager to respond to the UI Toggle actions that we created earlier.

The following capture shows the corresponding script hierarchy. A discussion of the four classes follows.

Controller Script Hierarchy

AgoraApiHandlerImpl: this class implements most of the Agora Video SDK event callbacks. Many of them are placeholders. To support the minimum capability in this game, the following handlers are of most interest:

  • JoinChannelSuccessHandler — when the local user joins a channel; this corresponds to creating a game and starting a server. The server name is the same as the channel name used for the Agora SDK.
  • UserJoinedHandler — when a remote user joins the game.
  • UserOfflineHandler — when a remote user leaves the game.

SDKWarningHandler is commented out to reduce noise in the debug log, but it is recommended to enable it for an actual project.

View the code on Gist.

AgoraApiHandlersImpl.cs
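For orientation, the shape of those handlers is roughly as follows; this is a simplified sketch (the real class wires up many more callbacks and notifies AgoraPlayerController), using the agora_gaming_rtc callback signatures.

using agora_gaming_rtc;
using UnityEngine;

public class AgoraEventHandlersSketch
{
    private readonly IRtcEngine mRtcEngine;

    public AgoraEventHandlersSketch(IRtcEngine engine)
    {
        mRtcEngine = engine;

        // Only the handlers this game needs; the SDK exposes many more
        mRtcEngine.OnJoinChannelSuccess += OnJoinChannelSuccessHandler;
        mRtcEngine.OnUserJoined += OnUserJoinedHandler;
        mRtcEngine.OnUserOffline += OnUserOfflineHandler;
    }

    private void OnJoinChannelSuccessHandler(string channelName, uint uid, int elapsed)
    {
        // The local player joined; the channel name matches the game/server name
        Debug.Log("Joined channel " + channelName + " as uid " + uid);
    }

    private void OnUserJoinedHandler(uint uid, int elapsed)
    {
        // A remote player joined: remember their Agora id so it can be bound to a tank later
        Debug.Log("Remote user joined: " + uid);
    }

    private void OnUserOfflineHandler(uint uid, USER_OFFLINE_REASON reason)
    {
        Debug.Log("Remote user left: " + uid + ", reason: " + reason);
    }
}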

AgoraVideoController: this singleton class is the main entry point for the Tanks project to interact with the Agora SDK. It creates the AgoraApiHandlerImpl instance and handles interface calls to join a channel, mute functions, etc. The code also checks for camera and microphone access permissions on Android devices.

View the code on Gist.

AgoraVideoController.cs

AgoraPlayerController: while the Unity UNet library maintains the network player’s profile, the Agora user ID is created asynchronously. We maintain a list of network players and a list of Agora user IDs. When the game scene actually starts, we bind the two lists together into a dictionary so the Agora ID can be looked up using a NetworkPlayer’s profile. (We wouldn’t need this binding mechanism if the user ID were known; in an actual production project, it is recommended to let the game server provide the user IDs to pass to the JoinChannel() call.)

View the code on Gist.

AgoraPlayerController.cs

AgoraUIManager: positions the container game object at the top-right of the game screen. It provides three toggle functions (sketched after the code reference below):

  1. Mic On: mutes the audio input.
  2. Cam On: mutes the local camera stream and turns off the display of the local player.
  3. CamSwitch: switches between the front and back cameras on the mobile device.

View the code on Gist.

AgoraUIManager.cs
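Each toggle boils down to a single SDK call; here is a minimal sketch, a simplified stand-in for the tutorial’s AgoraUIManager, assuming the engine was already created elsewhere so IRtcEngine.QueryEngine() returns it:

using agora_gaming_rtc;
using UnityEngine;

public class AgoraToggleSketch : MonoBehaviour
{
    // Wired to the "Mic On" toggle
    public void OnMicToggle(bool micOn)
    {
        IRtcEngine.QueryEngine()?.MuteLocalAudioStream(!micOn);
    }

    // Wired to the "Cam On" toggle (the real class also hides the local player's display)
    public void OnCamToggle(bool camOn)
    {
        IRtcEngine.QueryEngine()?.MuteLocalVideoStream(!camOn);
    }

    // Wired to the "FrontCam" toggle
    public void OnCamSwitch(bool useFrontCam)
    {
        IRtcEngine.QueryEngine()?.SwitchCamera();
    }
}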

Tanks Code Modifications

We will integrate the above controller code into the existing project by updating the Tanks code in the following classes:

TankManager

1.) Add a field to hold the VideoSurface instance that we added to the Plane, and drag the Plane game object from the prefab’s children onto the field.

2.) Add a constant to name the video-surface.

public const string LocalTankVideoName = "Video-Local";

3.) Change code near the end of the initialize() method, where it looked like this before:

old code

The new code:

View the code on Gist.

Discussion: here is the code that associates the plane display we created earlier with the video feed to render. The VideoSurface script handles this work. The only thing it needs is the Agora Id. If it is the local player, the Agora Id will default to 0, and the SDK will automatically render the device’s camera video onto the hosting plane. If this is a remote player, then the non-zero Agora Id is required to get the stream to render.
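In code, that binding amounts to two calls on the Agora SDK’s VideoSurface component; the helper below is a hypothetical illustration, not the exact TankManager code.

using agora_gaming_rtc;

public static class TankVideoBinder
{
    // Attach an Agora stream to the VideoSurface on a tank's plane.
    // Pass agoraUid = 0 for the local player; pass the real uid for a remote player.
    public static void Bind(VideoSurface surface, uint agoraUid)
    {
        surface.SetForUser(agoraUid);
        surface.SetEnable(true);
    }
}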

Calling JoinChannel()

The JoinChannel() calls on the AgoraVideoController class establish the local player’s status and start a channel server. There are three places to initiate the call.

1.) CreateGame.cs: add a line to the StartMatchmakingGame() function inside the callback. It will look like this:

private void StartMatchmakingGame()
{
    GameSettings settings = GameSettings.s_Instance;
    settings.SetMapIndex(m_MapSelect.currentIndex);
    settings.SetModeIndex(m_ModeSelect.currentIndex);

    m_MenuUi.ShowConnectingModal(false);

    Debug.Log(GetGameName());
    m_NetManager.StartMatchmakingGame(GetGameName(), (success, matchInfo) =>
    {
        if (!success)
        {
            m_MenuUi.ShowInfoPopup("Failed to create game.", null);
        }
        else
        {
            m_MenuUi.HideInfoPopup();
            m_MenuUi.ShowLobbyPanel();
            AgoraVideoController.instance.JoinChannel(m_MatchNameInput.text);
        }
    });
}

2. LevelSelect.cs: add the call in OnStartClick(). And the function will look like this:

public void OnStartClick()
{
    SinglePlayerMapDetails details = m_MapList[m_CurrentIndex];
    if (details.medalCountRequired > m_TotalMedalCount)
    {
        return;
    }

    GameSettings settings = GameSettings.s_Instance;
    settings.SetupSinglePlayer(m_CurrentIndex, new ModeDetails(details.name, details.description, details.rulesProcessor));

    m_NetManager.ProgressToGameScene();
    AgoraVideoController.instance.JoinChannel(details.name);
}

3. LobbyServerEntry.cs: add the call in JoinMatch(). Modify the function signature to add string channelName. And the function will look like this:

private void JoinMatch(NetworkID networkId, string channelName)
{
    MainMenuUI menuUi = MainMenuUI.s_Instance;

    menuUi.ShowConnectingModal(true);

    m_NetManager.JoinMatchmakingGame(networkId, (success, matchInfo) =>
    {
        // Failure flow
        if (!success)
        {
            menuUi.ShowInfoPopup("Failed to join game.", null);
        }
        // Success flow
        else
        {
            menuUi.HideInfoPopup();
            menuUi.ShowInfoPopup("Entering lobby...");
            m_NetManager.gameModeUpdated += menuUi.ShowLobbyPanelForConnection;

            AgoraVideoController.instance.JoinChannel(channelName);
        }
    });
}

Update Populate() in the same file to reflect the signature change:

public void Populate(MatchInfoSnapshot match, Color c)
{
    string[] split = match.name.Split(new char[1] { '|' }, StringSplitOptions.RemoveEmptyEntries);
    string channel_name = split[1].Replace(" ", string.Empty);
    m_ServerInfoText.text = channel_name;

    m_ModeText.text = split[0];

    m_SlotInfo.text = string.Format("{0}/{1}", match.currentSize, match.maxSize);

    NetworkID networkId = match.networkId;

    m_JoinButton.onClick.RemoveAllListeners();
    m_JoinButton.onClick.AddListener(() => JoinMatch(networkId, channel_name));

    m_JoinButton.interactable = match.currentSize < match.maxSize;
}
NetworkManager.cs

Insert code for player leaving the channel in Disconnect():

public void Disconnect()
{
    switch (gameType)
    {
        case NetworkGameType.Direct:
            StopDirectMultiplayerGame();
            break;
        case NetworkGameType.Matchmaking:
            StopMatchmakingGame();
            break;
        case NetworkGameType.Singleplayer:
            StopSingleplayerGame();
            break;
    }
    AgoraVideoController.instance.LeaveChannel();
}

That’s basically all the code changes we need to get video streaming working for the local and remote players! But wait, there is a catch we missed: the plane rotates with the tank when moving! See one of the tilted positions:

Tank Moved with Plane

We will need another script to fix the rotation:

View the code on Gist.
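The idea of that script is simply to cancel out the tank’s rotation every frame so the video always faces the same direction; a minimal version, attached to the Plane, could be:

using UnityEngine;

// Keeps the video Plane at a fixed world rotation while the tank turns underneath it.
public class FixedRotation : MonoBehaviour
{
    private Quaternion m_InitialRotation;

    void Start()
    {
        m_InitialRotation = transform.rotation;
    }

    void LateUpdate()
    {
        // Re-apply the original world rotation after the tank has moved this frame
        transform.rotation = m_InitialRotation;
    }
}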

Build the project, deploy the game to iOS or Android devices, and start playing with a friend! You’ll see the other person’s face (and yours) on top of the tanks, and you can yell at each other now!

So, we are done building a fun project!

Bear vs Duck in a Tank Battle!

The current, complete code is hosted on Github.


Introducing the Newly Revamped Agora.io Console


Over the past few months, we’ve been steadily working on enhancing user experience with the Agora platform. We’re now thrilled to share these exciting new developments via a revamped Agora Console, having added many new features and benefits to better support a robust platform that powers real-time engagement.

Let’s dive into the details and explore some of the critical improvements that will help you more easily manage your Agora projects, and get the most out of your personalized Agora Console.

Enhanced User Onboarding Experience

Starting up a project just got easier. As a new user, you’re now immediately greeted with a welcome screen to spin up your first web project and experience our call quality directly on your web browser.

Name your first project and initialize the Agora Engine

This process launches your initial introduction to the Agora Console. The guided QuickStart helps highlight all the key elements of setting up an Agora video call. Once you complete the onboarding, you will be brought to the dashboard landing page, which centralizes the most important tools into one hub.

Initiate your first onboarding video project

The Agora Console also has an API that allows developers to use RESTful requests to ban users, check usage, and inquire about online statistics on the server. You can get the Customer ID and Customer Certificate for the RESTful API in Agora Console.

Real-Time Console and Overview Access

The redesigned Console gives an instant yet simplified summary view, including overviews of ongoing projects, account info, sample codes, and usage analytics. Instead of showing all the statistics on one page, now you get a snippet within each section.

Snapshot of the new Agora Console

From the Console, you can access any section to retrieve more detailed information (you can also access those areas from the individual navigation tabs). Here’s a rundown of what you can expect to see here.

Project Usage Monitoring and Accounts: Here’s where you can start a new project, view ongoing projects, and review quick usage data.

Balance View and Analytics: This section gives you a quick glance at your account balance, transaction history, and message view.

Quick Links: This provides you with product overviews, a quick start guide, sample codes, and API references, giving easier access to relevant app development tools.

Let’s take a closer look at the navigation tabs and how we’ve organized the sections.

Revamped Anchor Navigation

To offer actionable insights and help you control your usage, we added a host of new features to the various page views in the navigation bar.

In the Product and Usage tab, you receive an immediate view of your Voice, Video HD, or Video HD+ (streaming) usage. You can also access your hourly, daily, weekly, or monthly usage. The audio, video call, and streaming services can be viewed by default, while other products such as Cloud Recording and the RTMP Converter must be self-enabled before they can be used.

Sample view of the Product and Usage tab

Next, is the Agora Analytics tab. Use this to track the quality of your calls and identify issues, find root causes, and troubleshoot solutions to improve the final user experience.

Quality of Experience (QoE) Overview

The Billing Center covers all your billing transactions. You can now attach a credit card to your account for easy payments, view your transactions and account balance, and make withdrawal requests via either a credit card or bank transfer.

Billing and Payment Center view

Next, we have the Team Member Management tab. There are five pre-defined roles in this section. The administrative user holds full access rights and defines the permissions; the other four roles — Finance, Product/Operations, CS/Maintenance and Engineering — have access to specific areas of the dashboard based on their role’s predefined permissions.

Member Management Section

The last item in this section is the Project Management tab. This provides a deeper view of all the projects you’ve created, including your first onboarding demo. Within this view, you can create up to 10 projects, view each project’s App ID, and generate a token, which serves as a unique password for each project.

Project Management view

Everything throughout the new Console is geared toward getting your projects going faster and smoother, with the high-impact results you need to enhance the user experience. We’re constantly adding new features and functions to the platform console, so be sure to keep a lookout for the latest updates.

Better yet, get involved in the developer community and let us know what would be most beneficial for your future rollouts. Your voice matters, so check out the Console and let’s hear your thoughts!

In the meantime, if you need help or have any questions about Agora’s products, here’s where you can submit a ticket and we’ll be glad to help.

The post Introducing the Newly Revamped Agora.io Console appeared first on Agora.io.

How to Broadcast Your Screen with Unity3D and Agora.io


Intro

Having the ability to share what you are viewing in your Unity3D experience can be a major advantage. In this tutorial, I will make this functionality easy for you by providing a drag and drop solution to live broadcast your screen and a brief overview of the code behind it. This will allow you to simply drag & drop a Prefab into your Unity Scene and display anything that is seen on the user’s screen over the Agora global Real-Time Communication network.

Prerequisites

  • Unity Editor
  • 2 devices to test on (one to broadcast, one to view), pretty much any device with Windows, Mac, Android, or iOS operating systems will work
  • developer account with Agora.io

Getting Started

To start, we will need to integrate the Agora Video SDK for Unity3D into our project by searching for it in the Unity Asset Store, or by clicking this link to be taken to it.

When you download and import the SDK into your project you will also receive README.md files for all of the different platforms the SDK supports. For your convenience, I will post the quick start tutorials for each platform below.

It is not imperative that you have built a sample app before using screen share, but having created an Agora sample app in the past can help with understanding and debugging.

Create the Screen Share Prefab

If you simply want to broadcast whatever is on the device display, you can do it quickly and easily by creating a Prefab that you can drag and drop into any scene in your Unity experience.

Create a Reference in the Scene

First, go into your Hierarchy, right-click to create a new 3D Object/Cube, and name it ScreenCapturePrefab. We are using a cube for debugging purposes, but you can also use an empty GameObject if you don’t want a visual representation on screen. Then we can create a new Material (I will color mine green and name it Green) and apply it to the Cube by dragging and dropping it onto the cube in the Scene.

Create the Script

Next, we will click the Add Component button in the ScreenCapturePrefab, type ShareScreen, and hit enter twice to create a script called ShareScreen.cs. We can then double-click the script in the Inspector to open it in your text editor (I use Visual Studio).

Let’s Review the Code

Now that we have the script open in our text editor, copy and paste or transcribe the code below over the existing default code in the ShareScreen.cs script.

As you can see below, the code is made up of a Start() function in which we create an engine with an appID that we will declare in the Editor. We then set up the log filter configuration and callback parameters, initialize the video, and enable the video observer. Next, we join the channel, which we will set in the Editor later, and create a rectangle the width and height of the screen and a Texture2D the width and height of the rectangle we just created.

Next up we have an Update() function where we will call the screenShare() function. We will run this as a coroutine so that we can take advantage of Unity’s built-in WaitForEndOfFrame() function later.

Lastly, we add the IEnumerator screenShare(), where we first call WaitForEndOfFrame() to make sure that everything on the screen is done rendering. We then have the texture read the pixels inside of the rectangle we created earlier and apply them to the texture. We then get the raw texture data from what we just applied and set it to an array of bytes. We then check if there is an engine present. If the engine we created is present, we are able to create an external video frame. We then set the configuration of the video frame and lastly push the external video frame.

View the code on Gist.

Now that you have added the code to your ShareScreen.cs script, go ahead and hit save and go back into your Unity Editor.

Turn it into a Prefab

Now we can go ahead and turn the ScreenCapturePrefab into a Prefab by creating a Prefabs folder inside our Assets folder and dragging and dropping the ScreenCapturePrefab into the Prefab folder. Or simply dragging the ScreenCapturePrefab from the Hierarchy into the Assets folder, will automatically turn this into a Prefab in most modern Unity Editors.

Review the Prefab

In your Unity Editor, when you look at the ScreenCapturePrefab you will now see new variables have opened up in your ShareScreen script component.

If you haven’t already, log in and get an AppID (it’s free). This will give you connectivity to the global Agora Real-time Communication network and allow you to broadcast across your living room or around the world. Also, your first 10,000 minutes are free every month.

Go ahead and add your App ID and hit save.

Now in whatever scene you drag and drop, instantiate, or activate your new ScreenShare Prefab in, the screen view will be broadcast to whatever channel you choose.

**Before running, to avoid colliding files, you will need to deactivate the files inside either the Assets/Plugins/x86 or Assets/Plugins/x86_64 folder: open the folder, highlight the files inside, uncheck them, and then hit the Apply button.

Let’s test

Now you can build this scene out to a device and it will stream automatically. I have set mine to activate and deactivate on a button press, but it will also work just sitting in a scene.

To view our shared screen, we can build a sample Agora scene on a different device with a matching App ID and channel name. Here are the Quickstart Guides again in case you missed them.

Windows
Mac
Android
iOS

Now when you go to the channel you selected for ScreenShare in the Editor (default channel: agora), you will see the screen view from the first device. Hurray! Pat yourself on the back, you did it!

All Done!

Thank you for following along. If you have any questions please feel free to join our Slack channel agoraiodev.slack/unity-help-me.

Other Resources

The post How to Broadcast Your Screen with Unity3D and Agora.io appeared first on Agora.io.

Technical Track: Past, Present, and Future of Speech Recognition


On the technical track at All Things RTC, Scott Stephenson, CEO of DeepGram, gave an in-depth presentation about next-gen speech recognition, specifically providing an overview of the past, present, and future of automatic speech recognition. In other words, machines learning to do it, not humans (since we already do this all the time).

As a company, DeepGram serves enterprise clients with customized speech recognition based on deep learning, offering transcription and multi-channel analysis at massive scale. Their focus is on real-time speech recognition in a corporate environment, such as meetings or business phone calls, often with lower-fidelity audio.

Speech: Where It All Begins

As Scott pointed out, speech is the most natural form of communication. The main problem in understanding it is figuring out the signal vs. the background interference. The way that machines have been able to achieve this has evolved greatly over the past few decades.

The Past of Speech Recognition

Back in the 1980s, certain acoustic models were established, with audio input being used to compute word options based on previously established vocabulary and speech patterns. The main problem with this was the extensive word matrices involved that made the process computationally intractable far too quickly. There were, simply put, too many options that required too much processing for the technology available at the time.

Over the years, into the 1990s-2000s, Dragon Voice became known as a promising platform that involved intensive training for individual users. Yet it could never replace typing as a form of traditional input and couldn’t be scaled across multiple users or customer bases. It remained limited in scope and performance.

The Present of Speech Recognition

Speech learning algorithms continued to evolve until we reached the current day, with smart homes full of devices that respond to wake words, such as Google or Alexa. While a great leap forward in performance and functionality, these devices are still based on a limited vocabulary and lack the general ability to decipher context. They’re helpful in simple use cases, but require hard-coded performance parameters and aren’t able to adapt well to different users or situations, or to filter out background noise.

The Future of Speech Recognition

Scott predicted that the future of speech recognition will be fueled by better hardware and machine learning organization, resulting in more powerful deep learning models that can be highly customized to the user (or company).

These deep neural networks, such as those employed by DeepGram, will involve active training and learning, with models understanding jargon and getting better at identifying disparate audio sources. It will essentially become “facial recognition” for words.

This training also involves input being shown to humans who then verify the output of the audio recognition system. Eventually, these systems will not only be able to be used for things like call transcriptions, but also for speaker identification, as well as labeling emotional context and other more ephemeral outputs.

To view the full video, go here!

The post Technical Track: Past, Present, and Future of Speech Recognition appeared first on Agora.io.

Part 2: Add User Management to your Agora Video Demo in iOS


This is a followup tutorial to my intro on how to build a video chatting app. You’ll need the completed project from that tutorial to follow this one. I recommend going through the tutorial yourself, but you can also download the starter project from Github.

Setting Up Firebase

Go to https://console.firebase.google.com and create a new Firebase project. Follow the instructions there to set up Firebase within your existing app. We’re going to be using Firebase for authentication and analytics, so make sure you add the following pods to your Podfile:

pod 'Firebase/Analytics' 
pod 'Firebase/Auth' 
pod 'FirebaseUI'

Once you’ve finished going through Firebase’s setup, you should have completed the following steps:

  1. Register your app’s Bundle ID with Firebase. (As a reminder, you can find your Bundle ID in your project settings, under General)
  2. Download the GoogleService-Info.plist file and add it to your app.
  3. Add the Firebase pods above to your Podfile, and run pod install. (Make sure to close and reopen your .xcworkspace file afterwards)
  4. Import Firebase to your AppDelegate, and call FirebaseApp.configure() in didFinishLaunchingWithOptions.
  5. Run your application and have Firebase verify communication.

You’ll then be presented with the Firebase dashboard. Go to the Develop pane, where you’ll find the Authentication section.

Click on the “Set up sign-in method” button to move to the sign-in method pane. Enable the Email/Password and Google sign-in options. You’ll need to set your public-facing app name and support email to do so.

In Xcode, you’ll need to set up a URL scheme to handle Google sign-in. Copy the REVERSED_CLIENT_ID field from your GoogleService-Info.plist, and open up the URL Types pane in the Info section of your project settings:

Add a new URL type and paste the reversed client ID into the URL Schemes field. We’ll also need to write some code so our app knows how to handle that URL. We’ll be using Firebase UI, so for us it’s as simple as just telling Firebase to handle it. Add the following to your AppDelegate.swift:

View the code on Gist.
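Since the gist itself isn’t rendered in this export, here is a minimal sketch of what that AppDelegate handler typically looks like when using FirebaseUI’s default FUIAuth setup. Treat it as an illustration of the standard pattern rather than the exact contents of the gist:

// At the top of AppDelegate.swift
import UIKit
import Firebase
import FirebaseUI

// Inside the AppDelegate class: let FirebaseUI consume the Google sign-in redirect URL
func application(_ app: UIApplication, open url: URL,
                 options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
    let sourceApplication = options[.sourceApplication] as? String
    return FUIAuth.defaultAuthUI()?.handleOpen(url, sourceApplication: sourceApplication) ?? false
}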

There are plenty of other sign-in options that you may want to allow, but we won’t be covering them here. If you have questions about one in particular, drop a line in the comments so I can cover it in the future.

New Views

With the addition of users being able to log in, we’re going to need a few new screens to cover the new functionality. We need a view for searching for other users to call, and a settings page where users can change their display name and log out. We’ll set aside our video view controller for now, but we’ll connect it up again later.

Add a Navigation Controller to your Main.storyboard and set it as the initial view controller. It will come with an attached TableViewController as the root view controller – get rid of that and replace it with a standard View Controller, and add a Search Bar and a Table View to it. Create a prototype cell with a label for the user’s display name and email, and give it the reuse identifier userCell.

Add a bar button item to the navigation bar so the user can access their settings. Add a new view controller for the settings page, and give it a text field for the user’s display name, and a button to save their changes and to log out. Make sure to set the action of the Settings button to show the Settings page.

Finally, create two new custom UIViewController subclasses and a custom UITableViewCell and hook everything up:

View the code on Gist.

Make sure you set the UserSearchViewController as the tableView’s delegate and dataSource.

View the code on Gist.

View the code on Gist.

Logging in with FirebaseUI

In this tutorial, we’ll be using Firebase’s built-in UI to handle sign-in for us. If you already have a login page, or simply want to be more flexible with your UI, you can find the docs for logging in programmatically with email and Google here and here, respectively.

We’re going to be using FirebaseUI to log in to our app. We’ll have our initial entry screen, our UserSearchViewController, handle showing the default FUIAuth View Controller. All we need to do is tell it what providers we want to allow, and who to tell when the user successfully logs in:

View the code on Gist.
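As a rough sketch (the provider class names vary slightly between FirebaseUI versions, and showLoginView is an assumed helper name), presenting the prebuilt login screen looks roughly like this:

func showLoginView() {
    guard let authUI = FUIAuth.defaultAuthUI() else { return }
    authUI.delegate = self                       // we get told when sign-in completes
    authUI.providers = [FUIGoogleAuth(),         // Google sign-in
                        FUIEmailAuth()]          // email/password sign-in
    let authViewController = authUI.authViewController()
    present(authViewController, animated: true)
}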

But there’s a problem. When do we want to show this page? We could show it on startup, but it would get pretty annoying to have to log in every time we open the app. To solve this, we can use something provided by FirebaseAuth — an AuthStateDidChangeListener. It will tell us whenever the user’s authentication state changes, and allow us to show the login page only if there’s no user already logged in. Adding one is pretty simple:

View the code on Gist.
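For reference, here is a minimal sketch of adding and removing such a listener (the property name authHandle and the showLoginView helper are assumptions):

var authHandle: AuthStateDidChangeListenerHandle?

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)
    authHandle = Auth.auth().addStateDidChangeListener { [weak self] _, user in
        // If nobody is signed in, present the FirebaseUI login flow
        if user == nil {
            self?.showLoginView()
        }
    }
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Stop listening when the view goes away
    if let handle = authHandle {
        Auth.auth().removeStateDidChangeListener(handle)
    }
}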

And there we have it! A functional login page that will appear if the current user is nil. There’s still more Firebase can do for us, though. It’s time to make a user database that we can use to allow our users to call each other.

Creating a User Database

Firebase will track our users for us — you can see this for yourself on the Authentication tab of the Firebase dashboard, after you sign in to your app with a new account. However, this list of users isn’t very useful to us. While we can get information from it about the currently logged-in user, it won’t allow us to get any info about other users. We’ll need our own database for that.

Go to the Database tab on the Firebase dashboard, and create a new Realtime Database. Start it in test mode for now, so we can easily modify it without having to worry about security while we’re working on it. We could add data manually here, but it’ll be easier to do it automatically in code.

Adding Users on Login

Head back to our FUIAuthDelegate extension. We’re going to make use of that didSignInWith callback to add a user to our database whenever they log in:

View the code on Gist.

This code gets a reference to our main database, and adds an entry in a new “users” node. Each child of a node needs to have a unique key, so we use the unique UID Firebase gives us, and we store the user’s email, their display name, and a lowercased version of their display name that will make it easier to search for it later.

Note that this code will overwrite our user node every time the user logs in. If you want to add additional fields to our user database, this code will need to be adjusted so it doesn’t delete things.
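To make that concrete, here is a rough sketch of what such a write might look like; the field names and helper function name are assumptions, not the gist’s literal contents:

func addUserToDatabase(_ user: User) {
    let entry: [String: Any] = [
        "email": user.email ?? "",
        "username": user.displayName ?? "",
        "searchName": (user.displayName ?? "").lowercased()   // lowercased copy for searching
    ]
    // setValue overwrites the whole node on every login;
    // switch to updateChildValues(_:) if you add fields you don't want wiped out.
    Database.database().reference()
        .child("users")
        .child(user.uid)
        .setValue(entry)
}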

Searching for Users

With this new code in our app, log in with a few accounts, and you’ll see new entries appear in the Database tab of the Firebase dashboard. The next step is to allow our users to search for other users. Let’s make that search bar a bit more functional.

First, we need to get a reference to our users table.

View the code on Gist.

Then, we can query that database when the user enters text into the search bar, and store the results in an array.

View the code on Gist.

This code creates a query that searches for a user whose username or email exactly matches the text the user entered. You can use .queryEnding(atValue: text + "\u{f8ff}") to instead search for all entries that match the prefix entered – e.g. searching for “Jo” would return users named “Jon,” “John,” or “Jonathan.”
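Putting that together, a prefix search against the users node might look like the sketch below (usersRef, searchResults, and the "searchName" field are assumed names from this tutorial’s setup):

func search(for text: String) {
    let query = usersRef
        .queryOrdered(byChild: "searchName")
        .queryStarting(atValue: text.lowercased())
        .queryEnding(atValue: text.lowercased() + "\u{f8ff}")   // \u{f8ff} sorts after any string with this prefix

    query.observeSingleEvent(of: .value) { [weak self] snapshot in
        var matches: [DataSnapshot] = []
        for case let child as DataSnapshot in snapshot.children {
            matches.append(child)
        }
        self?.searchResults = matches
        self?.tableView.reloadData()
    }
}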

Now that we have results, it might be helpful to actually have our table display them:

View the code on Gist.

If you run your app and search for another user, they will now appear in your list! Very cool. However, you may also notice Firebase complaining at you in the console:

[Firebase/Database][I-RDB034028] Using an unspecified index. Your data will be downloaded and filtered on the client. Consider adding “.indexOn”: “username” at /users to your security rules for better performance

This is Firebase telling us that it’s not indexing our users by our search fields, because we haven’t told it to. With as few users as we have now, it’s not a big deal, but if we ever want to release to a large userbase, we should fix this. Fortunately, adding the rule is easy. Head to the Database tab in your Firebase dashboard, and open up your Rules. Add the .indexOn field to your users database and hit Publish:

Add Settings

Let’s fill out our Settings View Controller next. First, let’s hook up our log out button. Like in the last screen, we’ll use an AuthStateDidChangeListener to keep track of changes to our auth state. That way, whenever the user object goes away for any reason, we can jump back to the root view controller (which will then show our login page). It also allows us to populate our name field with the user’s current display name.

View the code on Gist.

Then, the log out function becomes extremely simple:

View the code on Gist.
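A minimal sketch of that action, assuming the auth state listener handles navigation back to the login screen:

@IBAction func didTapLogOut(_ sender: Any) {
    do {
        try Auth.auth().signOut()
    } catch {
        print("Error signing out: \(error)")
    }
}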

To allow the user to change their display name, we need to create a UserProfileChangeRequest, which will update their data in Firebase Auth. We will also need to update the values in our user database. We do all this work in didTapSave():

View the code on Gist.
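Here is a sketch of what that save action might look like; the nameField outlet and the database field names are assumptions:

@IBAction func didTapSave(_ sender: Any) {
    guard let user = Auth.auth().currentUser,
          let newName = nameField.text, !newName.isEmpty else { return }

    // 1. Update the display name stored in Firebase Auth
    let changeRequest = user.createProfileChangeRequest()
    changeRequest.displayName = newName
    changeRequest.commitChanges { error in
        if let error = error {
            print("Profile update failed: \(error)")
            return
        }
        // 2. Keep our own users node in sync so search results show the new name
        Database.database().reference()
            .child("users").child(user.uid)
            .updateChildValues(["username": newName,
                                "searchName": newName.lowercased()])
    }
}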

Video Calls, Once More

It’s finally time to hook up our video call screen again. We’re going to use a manual segue to do this, so we can pass in some useful data about the user we want to call. To create a manual segue, Control+Click on the User Search View Controller and drag to the Video View Controller, and select ‘Show’.

Make sure to give it an identifier — “startCall” will work.

We’re going to use tableView’s didSelectRowAt and prepareForSegue to make sure our video view controller gets initialized correctly.

View the code on Gist.

And over in our Video controller:

View the code on Gist.

Our prepareForSegue creates a channel name that combines the UIDs of the current user and the user that was searched for, alphabetized to make sure both parties end up in the same room. We’ve lost the ability to create group calls, but don’t worry, we’ll be adding that back in… next time. There are many more features to add to this app, but for now we’ll stop here, since we’ve already covered quite a lot.
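As a quick illustration of that channel-naming idea (the destination class name, selectedUserUID, and channelName properties are assumptions, not the tutorial’s exact identifiers):

override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
    guard segue.identifier == "startCall",
          let videoVC = segue.destination as? VideoCallViewController,
          let currentUID = Auth.auth().currentUser?.uid,
          let remoteUID = selectedUserUID else { return }

    // Sorting the two UIDs guarantees both users derive the same channel name
    videoVC.channelName = [currentUID, remoteUID].sorted().joined(separator: "-")
}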

We now have an app that users can log into, set their display names, search for other users within the app, and then join calls with those users. That’s quite an accomplishment for this week. If you want to expand your interaction with Firebase, their guides can be found here and their docs here.

Happy coding!

The post Part 2: Add User Management to your Agora Video Demo in iOS appeared first on Agora.io.

How RTC is Opening Doors to More Human Connections Across the Globe


This presentation was run by Mish Matheus, a social and digital media consultant, who handled questions and discussions with Carrie Wells, the Founder & CEO at One Touch Brands, Carter Williams, CEO at Lightmob, and Tina Kuan, CMO at Castbox. The topic revolved around RTC and video/audio industry changes that are impacting people on a very personal level.

The speakers noted that communication and user engagement have gone increasingly mobile, having surpassed desktop, and more mobile apps are being created with richer, widely used features. There’s a constant rise in voice and video apps across all sectors, creating many more touchpoints throughout our daily lives where real-time voice and video interactions are common.

Matheus asked how each company on the panel came into being, and what need it was meeting. For Castbox, it was providing a platform for podcast enthusiasts and tapping into the growing audio industry. For One Touch Brands, it was figuring out how to use two-way video for companies and brands who wanted to have digital customer experiences. And for Lightmob, it was being an online photography workshop platform that lets photography enthusiasts and pros learn how to use smartphones and digital cameras better directly from one another.

Bridging a Generational Gap

One big issue discussed was how relationships and connections are fundamental to the human experience, and that RTC is changing the way these function. When asked how the online landscape has impacted our human connections, the panelists brought up several points.

First, there’s a big generational shift, with younger technology users being far more comfortable with mobile RTC than older ones. More video and audio is being used than ever before, bringing a truly personal, face-to-face experience to mobile platforms. And RTC is also allowing for more immediate interaction and satisfaction via digital interfaces that automate the whole experience.

Getting Relational Validation

People want instant gratification and validation in our interactions, and RTC allows this to be offered via different social channels and streaming platforms. It has changed how we get our news. But there’s also the threat of ongoing digital isolation, where meaningful, substantive relationships go overlooked in preference for a surface-level digital community.

Carter made the point that every company that provides a mobile experience must take responsibility for the interactivity and consequences, especially now that RTC can turn every piece of mobile life into a transaction.

Biggest Opportunities of RTC

Finally, the panelists pointed out the main opportunities that RTC offers companies and people, which included delivering better content and higher interactivity. The education space and online learning are being revolutionized through virtual universities and online adult education. Health and wellness is booming on digital platforms, and gaming has become its own competitive social platform ecosystem, with services like Discord and Twitch letting players share in-the-moment experiences.

But beyond mere entertainment and engagement, there’s a huge opportunity for improved healthcare and mental health providers with video peer-to-peer support groups and counseling. Nonprofits can make direct connections that emphasize the human experience and bring about positive change in people’s lives, offering helpful resources and reaching isolated people or communities.

Want to see the whole panel? View the video here!

The post How RTC is Opening Doors to More Human Connections Across the Globe appeared first on Agora.io.


How to Build a React Native Video Calling App Using Agora.io


Adding video streaming functionality within a React Native application from scratch can be a daunting task that some might think impossible. Maintaining low latency, balancing load, and managing user event states can be incredibly tedious. On top of that, you have to maintain cross-platform compatibility.

Well, there’s an easy way to solve all these issues. In this article, I will guide you through building a React Native video calling app by utilizing the magic of Agora’s Video SDK. We’ll go over the structure, setup, and execution of the app before diving into the logistics. You can get a cross-platform video call app going in a few simple steps within a matter of minutes.

We’ll be using the Agora RTC SDK for React Native for the example below.

Creating an account with Agora

Sign up on https://dashboard.agora.io/en/signup and login to the dashboard. Navigate to the project list tab under projects and create a new project by clicking the green button as shown below.
Create a new project and retrieve the App ID. This will be used to authorize your requests while developing the application.

Structure of our example

This is the structure of the application that we are looking at:

.
├── android
├── components
│   ├── Home.js
│   ├── permission.js
│   ├── Router.js
│   └── Video.js
├── ios
├── index.js
.

Let’s run the app

You’ll need to have the latest version of Node.js and NPM/Yarn installed.
  • Make sure you’ve registered an Agora account, setup up a project and generated an App ID.
  • Download and extract the zip file from master branch.
  • Run npm install or use yarn to install the app dependencies in the unzipped directory.
  • Navigate to ./components/Home.js and edit line 13 to enter your App ID that we generated as AppID: 'YourAppIDGoesHere'
  • Open a terminal and execute react-native link react-native-agora and react-native link react-native-vector-icons. This links the necessary files from your native modules.
  • Connect your device and run react-native run-android / react-native run-ios to start the app. Give it a few minutes to do its magic.
  • Once you see the home screen on your mobile (or emulator) enter a channel name and hit submit on the device.
  • Use the same channel name on a different device.
  • That’s it. You should have a video call going between the two devices.
The app supports up to 5 users for now, but this can be extended simply by adding more view layouts in ./components/Video.js

Getting to how it works

permission.js

View the code on Gist.

I’m using a basic function and exporting it so we can use it later to request camera and microphone permissions from the OS on Android.

Router.js

View the code on Gist.

We’re using react-native-router-flux to navigate easily between the landing page and the video call screen. We set up the two scenes mentioned above: Home and Video.

Home.js

View the code on Gist.

We have our required import statements, and we set up our class-based component Home. We define our App ID and Channel Name as state variables. We call the function from permission.js to get access to the camera and mic.

Home component will be our landing screen when our app is launched.

It will have two fields to enter the App ID and Channel Name, and a button to submit the data and start the call.

View the code on Gist.

Next, we define a handleSubmit function to get input values and pass them to the Video.js component which is activated using the Router.js component. After that, we define the view and the styles for it. Then, we export our component to use with the router.

Video.js

View the code on Gist.

We write the required import statements, define the Agora object as a native module, and set the defaults from it.

We define the class-based Video component. In the constructor, we set our state variables: peerIds (an array of connected peers), uid (the local user’s unique id), appid (the Agora App ID), channelName, vidMute (true to mute the local user’s video, false otherwise), and, similarly, audMute for audio.

We set up our video stream configuration in const config and initialize the RTC engine, by calling RtcEngine.init(config).

Before we bring together the components, we define functions to handle user events: when a new user joins the call, we add their uid to the array; when a user leaves the call, we remove their uid from the array; and if the local user successfully joins the call channel, we start the stream preview.

View the code on Gist.

We define functions to toggle audio and video feeds of the local user and to end the call by leaving the channel.

View the code on Gist.

Next, we define the view for the different possible numbers of users; we start with 4 external users on the channel and move down to no users using conditional operators. We call this function inside our render method. We define styles for our internal components and export our component Video to use with the router.

That’s it, that’s how the app works. You can use pretty much the same execution to add multi-user video-calling in your own React Native application using Agora’s RTC SDK.

The post How to Build a React Native Video Calling App Using Agora.io appeared first on Agora.io.

How To: Build an Augmented Reality Remote Assistance App


Have you ever been on the phone with customer support and struggled to describe the issue, or had the support person fail to clearly describe the solution or not understand what/where you should be looking?

Most remote assistance today is done through audio or text based chat. These solutions can be frustrating for users who may have a hard time describing their issues or understanding new concepts and terminology associated with troubleshooting whatever they need help with.

Thankfully technology has reached a point where this issue can be easily solved using Video Chat and Augmented Reality. In this guide, we’ll walk through all the steps you need to build an iOS app that leverages ARKit and video chat to create an interactive experience.

Prerequisites

  • A basic to intermediate understanding of Swift and the iOS SDK
  • Basic understanding of ARKit and Augmented Reality concepts
  • Agora.io Developer Account
  • Cocoa Pods
  • Hardware: a Mac with Xcode and 2 iOS devices
    — iPhone: 6S or newer
    — iPad: 5th Generation or newer

Please Note: While no Swift/iOS knowledge is needed to follow along, certain basic concepts in Swift/ARKit won’t be explained along the way.

Overview

The app we are going to build is meant to be used by two users who are in separate physical locations. One user will input a channel name and CREATE the channel. This will launch a back-facing AR-enabled camera. The second user will input the same channel name as the first user and JOIN the channel.

Once both users are in the channel, the user that created the channel will broadcast their rear camera into the channel. The second user has the ability to draw on their local screen, and have the touch input displayed in augmented reality in the first user’s world.

Let’s take a moment to review all the steps that we’ll be going through:

  1. Download and build starter project
  2. Project structure overview
  3. Add video chat functionality
  4. Capture and normalize touch data
  5. Add data transmission
  6. Display touch data in augmented reality
  7. Add “Undo” functionality

Getting Started with the Starter Project

I have created a starter project for this tutorial that includes the initial UI elements and buttons, including the bare-bones AR and remote user views.

Let’s start by downloading the repo above. Once all the files have finished downloading, open the Terminal window to the project’s directory and run pod install to install all dependencies. Once the dependencies have finished installing, open the AR Remote Support.xcworkspace in Xcode.

Once the project is open in Xcode, let’s build and run the project using the iOS simulator. The project should build and launch without issue.

Add a channel name, then click the Join and Create buttons to preview the UIs that we will be working with.

Project Structure Overview

Before we start coding, let’s walk through the starter project files to understand how everything is setup. We’ll start with the dependencies, then go over the required files, and lastly we’ll take a look at the custom classes that we’ll be working with.

Within the Podfile, there are two third-party dependencies: Agora.io’s Real-Time Communications SDK, which facilitates building the video chat functionality, and ARVideoKit’s open-source renderer, which lets us use the rendered AR view as a video source. The reason we need an off-screen renderer is that ARKit obscures the rendered view, so we need a framework to handle the task of exposing the rendered pixel buffer.

As we move into the project files, AppDelegate.swift has the standard setup with one minor update. The ARVideoKit library is imported, and there’s an added delegate function for UIInterfaceOrientationMask that returns ARVideoKit’s orientation. Within the info.plist, the required permissions for Camera and Microphone access are included. These permissions are required by ARKit, Agora, and ARVideoKit.

Before we jump into the custom ViewControllers, let’s take a look at some of the supporting files/classes that we’ll be using. GetValueFromFile.swift allows us to store any sensitive API credentials in keys.plist so we don’t have to hard-code them into the classes. SCNVector3+Extensions.swift contains extensions and helper functions for the SCNVector3 class that make mathematical calculations simpler. The last helper file is ARVideoSource.swift, which contains the proper implementation of the AgoraVideoSourceProtocol, which we’ll use to pass our rendered AR scene as the video source for one of the users in the video chat.

The ViewController.swift is a simple entry point for the app. It allows users to input a Channel Name and then choose whether they want to: CREATE the channel and receive remote assistance; JOIN the channel and provide remote assistance.

The ARSupportBroadcasterViewController.swift handles the functionality for the user who is receiving remote assistance. This ViewController will broadcast the rendered AR scene to the other user, so it implements the ARSCNViewDelegate, ARSessionDelegate, RenderARDelegate, and AgoraRtcEngineDelegate.

The ARSupportAudienceViewController.swift handles the functionality for the user who is providing remote assistance. This ViewController will broadcast the user’s front-facing camera and will allow the user to draw on their screen and have the touch information displayed in the remote user’s augmented reality scene, so it implements the UIGestureRecognizerDelegate, AgoraRtcEngineDelegate.

For simplicity, let’s refer to ARSupportBroadcasterViewController as BroadcasterVC and ARSupportAudienceViewController as AudienceVC.

Adding Video Chat Functionality

We’ll start by adding our AppID into the keys.plist file. Take a moment to log into your Agora Developer Account, copy your App ID and paste the hex into the value for AppID within keys.plist.

View the code on Gist.

An example of the keys.plist file with an Agora AppID

Now that we have our AppID set, we will use it to initialize the Agora Engine within the loadView function for both BroadcasterVC and AudienceVC.

There are slight differences in how we set up the video configurations. In the BroadcasterVC we are using an external video source so we can set up the video configuration and the source within the loadView.

View the code on Gist.

the loadView function within ARSupportBroadcasterViewController
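Since the gist isn’t visible in this export, here is a rough sketch of what that setup typically involves; the agoraKit and arVideoSource property names, the helper signature, and the exact configuration values are assumptions rather than the gist’s literal contents:

override func loadView() {
    super.loadView()
    // Read the AppID from keys.plist via the helper (signature assumed)
    let appID = getValue(withKey: "AppID", within: "keys") ?? ""
    agoraKit = AgoraRtcEngineKit.sharedEngine(withAppId: appID, delegate: self)
    agoraKit.setChannelProfile(.communication)
    // Feed the ARVideoKit-rendered frames into the channel instead of a device camera
    agoraKit.setVideoSource(self.arVideoSource)
    agoraKit.enableVideo()
}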

Within the AudienceVC we will init the engine and set the Channel Profile in the loadView, but we will wait to configure video settings within the viewDidLoad.

View the code on Gist.

the loadView function within ARSupportAudienceViewController

Note: We’ll add in the touch gestures functionality later on in this tutorial.

Let’s also set up the video configuration within the AudienceVC. Within the viewDidLoad call the setupLocalVideo function.

override func viewDidLoad() {
    super.viewDidLoad()
  ...
    // Agora implementation
    setupLocalVideo() //  - set video configuration
    //  - join the channel
  ...
}

Add the code below to the setupLocalVideo function.

View the code on Gist.
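As a minimal sketch of that configuration (the localVideoView outlet and the encoder settings shown are assumptions):

func setupLocalVideo() {
    agoraKit.enableVideo()
    let config = AgoraVideoEncoderConfiguration(size: AgoraVideoDimension640x360,
                                                frameRate: .fps15,
                                                bitrate: AgoraVideoBitrateStandard,
                                                orientationMode: .fixedPortrait)
    agoraKit.setVideoEncoderConfiguration(config)

    // Render the local front camera into a small preview view
    let videoCanvas = AgoraRtcVideoCanvas()
    videoCanvas.uid = 0                    // 0 tells the SDK to use the local user
    videoCanvas.view = localVideoView
    videoCanvas.renderMode = .hidden
    agoraKit.setupLocalVideo(videoCanvas)
}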

Next we’ll join the channels from the viewDidLoad. Both ViewControllers use the same function to join the channel. In each BroadcasterVC and AudienceVC call the joinChannel function within the viewDidLoad.

override func viewDidLoad() {
    super.viewDidLoad()
  ...
    joinChannel() // Agora - join the channel
}

Add the code below to the joinChannel function.

View the code on Gist.

The joinChannel function will set the device to use the speakerphone for audio playback, and join the channel set by the ViewController.swift.

Note: This function will attempt to get the token value stored in keys.plist. This line is there in case you would like to use a temporary token from the Agora Console. For simplicity I have chosen to not use token security, so we have not set the value. In this case the function will return nil, and the Agora engine will not use token based security for this channel.
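A condensed sketch of that flow is shown below; the token helper signature and the channelName property are assumptions:

func joinChannel() {
    agoraKit.setDefaultAudioRouteToSpeakerphone(true)
    let token = getValue(withKey: "token", within: "keys")   // nil when token security isn't used
    agoraKit.joinChannel(byToken: token, channelId: channelName, info: nil, uid: 0) { channel, uid, _ in
        print("Joined \(channel) with uid \(uid)")
    }
    UIApplication.shared.isIdleTimerDisabled = true   // keep the screen awake during the session
}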

Now that users can join a channel, we should add functionality to leave the channel. Similar to joinChannel, both ViewControllers use the same function to leave the channel. In each BroadcasterVC and AudienceVC add the code below to the leaveChannel function.

View the code on Gist.

The leaveChannel function gets called in popView and viewWillDisappear because we want to make sure we leave the channel whenever the user taps to exit the view or dismisses the app (backgrounded/exited).
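A minimal sketch of that function, assuming the same agoraKit property as above:

func leaveChannel() {
    agoraKit.leaveChannel(nil)                          // exit the channel and stop all streams
    UIApplication.shared.isIdleTimerDisabled = false    // let the screen sleep again
}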

The last Video Chat feature we need to implement is the toggleMic function, which gets called anytime the user taps the microphone button. Both BroadcasterVC and AudienceVC use the same function, so add the code below to the toggleMic function.

View the code on Gist.
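For illustration, a toggle along these lines would work; the audioMuted flag and the button titles are assumptions:

@IBAction func toggleMic(_ sender: UIButton) {
    audioMuted.toggle()
    agoraKit.muteLocalAudioStream(audioMuted)   // true stops sending the local audio stream
    let title = audioMuted ? "unmute" : "mute"
    sender.setTitle(title, for: .normal)
}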

Handling Touch Gestures

In our app, the AudienceVC will provide remote assistance by using their finger to draw on their screen. Within the AudienceVC we’ll need to capture and handle the user’s touches.

First, we’ll capture the location whenever the user initially touches the screen and set that point as the starting point. As the user drags their finger across the screen, we’ll keep track of all those points by appending each one to the touchPoints array, so we need to ensure the array is empty with every new touch. I prefer to reset the array in touchesBegan to mitigate instances where the user adds a second finger to the screen.

View the code on Gist.

Note: This example will only support drawing with a single finger. It is possible to support multi-touch drawing, it would require some more effort to track the uniqueness of the touch event.
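A minimal sketch of that touchesBegan override (touchStart and touchPoints are assumed properties):

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    self.touchStart = touch.location(in: self.view)   // remember where this stroke begins
    self.touchPoints = []                             // reset the buffer for the new stroke
}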

To handle the finger movement, let’s use a Pan Gesture. Within this gesture we’ll listen for the gesture to start, change, and end states. Let’s start by registering the Pan Gesture.

View the code on Gist.

Once the Pan Gesture is recognized, we’ll calculate the position of the touch within the view. The GestureRecognizer gives us the touch positions as values relative to the Gesture’s initial touch. This means that the translation from GestureRecognizer at GestureRecognizer.began is (0,0). The self.touchStart will help us to calculate the x,y values relative to the view’s coordinate system.

View the code on Gist.

Once we’ve calculated the pixelTranslation (x,y values relative to the view’s coordinate system), we can use these values to draw the points to the screen and to “normalize” the points relative to the screen’s center point.

I’ll discuss normalizing the touches in a moment, but first let’s go through drawing the touches to the screen. Since we are drawing to the screen, we’ll want to use the main thread. So within a Dispatch block we’ll use the pixelTranslation to draw the points into the DrawingView. For now, don’t worry about removing the points, because we’ll handle that when we transmit the points.

Before we can transmit the user’s touches, we need to normalize the points relative to the screen’s center. UIKit places (0,0) at the upper-left corner of the view, but within ARKit we’ll need to add the points relative to the ARCamera’s center point. To achieve this we’ll calculate the translationFromCenter using the pixelTranslation and subtracting half of the view’s height and width.

visualization of differences between UIKit and ARCamera coordinate systems
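Pulling those pieces together, a pan handler along these lines would do the job; drawingView, lineColor, touchStart, and touchPoints are assumed properties, not the gists’ exact identifiers:

@objc func handlePan(_ gestureRecognizer: UIPanGestureRecognizer) {
    let translation = gestureRecognizer.translation(in: self.view)
    // Convert the gesture's relative translation into view coordinates
    let pixelTranslation = CGPoint(x: touchStart.x + translation.x,
                                   y: touchStart.y + translation.y)

    // Draw the point locally, on the main thread
    DispatchQueue.main.async {
        let layer = CAShapeLayer()
        layer.path = UIBezierPath(arcCenter: pixelTranslation, radius: 2.5,
                                  startAngle: 0, endAngle: .pi * 2, clockwise: true).cgPath
        layer.fillColor = self.lineColor.cgColor
        self.drawingView.layer.addSublayer(layer)
    }

    // Re-origin the point to the screen's center before sending it to the AR side
    let translationFromCenter = CGPoint(x: pixelTranslation.x - (view.frame.width / 2),
                                        y: pixelTranslation.y - (view.frame.height / 2))
    touchPoints.append(translationFromCenter)
}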

Transmitting Touches and Colors

To add an interactive layer, we’ll use the DataStream provided as part of the Agora engine. Agora’s Video SDK allows you to create a data stream capable of sending up to 30 packets (of up to 1 KB each) per second. Since we will be sending small data messages, this will work well for us.

Let’s start by enabling the DataStream within the firstRemoteVideoDecoded. We’ll do this in both BroadcasterVC and AudienceVC.

View the code on Gist.

If the data stream is enabled successfully, self.streamIsEnabled will have a value of 0. We’ll check this value before attempting to send any messages.

Now that the DataStream is enabled, we’ll start with AudienceVC. Let’s review what data we need to send: touch-start, touch-end, the points, and color. Starting with the touch events, we’ll update the PanGesture to send the appropriate messages.

Note: Agora’s Video SDK DataStream uses raw data so we need to convert all messages to Strings and then use the .data attribute to pass the raw data bytes.

View the code on Gist.

ARKit runs at 60 fps so sending the points individually would cause us to hit the 30 packet limit resulting in point data not getting sent. So we’ll add the points to the dataPointsArray and transmit them every 10 points. Each touch-point is about 30–50 bytes, so by transmitting every tenth point we will stay well within the limits of the DataStream.

View the code on Gist.
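A rough sketch of that batching logic is shown below; it would sit inside the pan handler after translationFromCenter is computed, and the wire format, streamID, and dataPointsArray names are assumptions:

// Inside the pan handler, after computing translationFromCenter
dataPointsArray.append(translationFromCenter)
if dataPointsArray.count == 10 && streamIsEnabled == 0 {
    // Flush every tenth point to stay under the 30-packets-per-second limit
    let pointsAsString = dataPointsArray.map { "(\($0.x),\($0.y))" }.joined()
    agoraKit.sendStreamMessage(streamID, data: pointsAsString.data(using: .utf8)!)
    dataPointsArray.removeAll()   // start buffering the next batch
}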

When sending the touch data, we can also clear the DrawingView. To keep it simple, we can get the DrawingView’s sublayers, loop through them, and remove them from the superlayer.

View the code on Gist.

Lastly, we need to add support for changing the color of the lines. We’ll send the cgColor.components to get the color value as a comma delimited string. We’ll prefix the message with color: so that we don’t confuse it with touch data.

View the code on Gist.

Now that we’re able to send data from the AudienceVC, let’s add the ability for BroadcasterVC to receive and decode the data. We’ll use the rtcEngine delegate’s receiveStreamMessage function to handle all data that is received from the DataStream.

View the code on Gist.

There are a few different cases that we need to account for, so we’ll use a Switch to check the message and handle it appropriately.

When we receive the message to change the color, we need to isolate the component values, so we need to remove any excess characters from the string. Then we can use the components to initialize the UIColor.

In the next section we’ll go through handling the touch-start and adding the touch points into the ARSCN.

Display Gestures in Augmented Reality

Upon receiving the message that a touch has started, we’ll want to add a new node to the scene and then parent all the touches to this node. We do this to group all the touch points and force them to always rotate to face the ARCamera.

View the code on Gist.

Note: We need to impose a LookAt constraint to ensure the drawn points always face the user; the points must always be drawn facing the camera.

When we receive touch-points we’ll need to decode the String into an Array of CGPoints that we can then append to the self.remotePoints array.

View the code on Gist.

Within the session delegate’s didUpdate, we’ll check the self.remotePoints array. We’ll pop the first point from the list and render a single point per frame to create the effect that the line is being drawn. We’ll parent the nodes to a single root node that gets created upon receipt of the touch-start message.

View the code on Gist.
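As a simplified sketch of that per-frame rendering (remotePoints, activeStrokeNode, remoteLineColor, and the pixel-to-meter scale factor are assumptions):

func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard !remotePoints.isEmpty, let strokeNode = activeStrokeNode else { return }
    // Render one point per frame so the line appears to be drawn in real time
    let point = remotePoints.removeFirst()

    let dot = SCNNode(geometry: SCNSphere(radius: 0.0025))
    dot.geometry?.firstMaterial?.diffuse.contents = remoteLineColor
    // The points arrive normalized to the screen center in pixels; scale them down to meters
    dot.position = SCNVector3(Float(point.x) / 1000.0, Float(-point.y) / 1000.0, 0)
    strokeNode.addChildNode(dot)
}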

Add “Undo”

Now that we have the data transmission layer set up, we can quickly keep track of each touch gesture and undo it. We’ll start by sending the undo message from the AudienceVC to the BroadcasterVC. We’ll add the code below to the sendUndoMsg function within our AudienceVC.

View the code on Gist.

Send the string “undo” as a message

Within the BroadcasterVC, we’ll check for the undo message within the rtcEngine delegate’s receiveStreamMessage function. Since each set of touch points is parented to its own root node, with every undo message we’ll remove the last rootNode (in the array) from the scene.

View the code on Gist.
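The removal itself can be as small as the sketch below, where strokeRootNodes is an assumed array holding one root node per stroke:

func undoLastStroke() {
    guard let lastStroke = strokeRootNodes.popLast() else { return }
    lastStroke.removeFromParentNode()   // removes the whole stroke, since its points are children
}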

Build and Run

Now we are ready to build and run our app. Plug in your two test devices, build and run the app on each device. On one device enter the channel name and Create the channel, and then on the other device enter the channel name and Join the channel.

Thanks for following and coding along with me; below is a link to the completed project. Feel free to fork it and make pull requests with any feature enhancements.

For more information about the Agora.io Video SDK, please refer to the Agora.io API Reference.

The post How To: Build an Augmented Reality Remote Assistance App appeared first on Agora.io.

Video Chat with Unity3D, the ARFoundation Version


Many of you took steps in creating your very own video chat app using our How To: Create a Video Chat App in Unity tutorial. Now that you’ve got that covered, let’s take your app up a couple notches by adding an immersive experience to it. In this Augmented Reality (AR) tutorial, you’ll learn how to communicate and chat with friends in AR. The concept is very similar to the previous tutorial with a few changes to make it work in less than an hour.

Prerequisites

  • Unity Editor (Version 2018 or above)
  • 2 devices to test on (one to broadcast, one to view)
  • Broadcast device will be a mobile device running the AR scene: an Apple device on iOS 11 or above, or an Android device on Android 7 or above
  • Viewing device will run the standard Video Chat demo app – pretty much any device with Windows, Mac, Android, or iOS operating systems will work
  • developer account with Agora.io

Getting Started

To start, we will need to integrate the Agora Video SDK for Unity3D into our project by searching for it in the Unity Asset Store or click this link to begin the download.

Video SDK on Asset Store

After you finish downloading and importing the SDK into your project, you should be able to see the README.md files for the different platforms the SDK supports. For your convenience, you can also access the quick start tutorials for each platform below.

Unity AR Packages

In the Unity Editor, open the Package Manager from the Window tab. Install the following packages:

For Unity 2018:

  • AR Foundation 1.0.0-preview.22 (the latest for 1.0.0)
  • ARCore XR Plugin 1.0.0-preview.24 (the latest for 1.0.0)
  • ARKit XR Plugin 1.0.0-preview.27 (the latest for 1.0.0)

For Unity 2019:

  • AR Foundation 2.0.2
  • ARCore XR Plugin 2.0.2
  • ARKit XR Plugin 2.0.2

Modify the Existing Project

Modify the Play Scene

Open up the TestSceneHelloVideo scene. Take out the Cube and Cylinder. Delete the Main Camera since we will use an AR Camera later.

Test Scene — before

On the Hierarchy panel, create an “AR Session” and an “AR Session Origin”. Click on the AR Camera and change its tag to “Main Camera”, then create a Sphere 3D object. Modify the transform position to (0,0,5.67) so it is visible in your Editor’s Game view, and save.

Test Scene — after

Unlike the Cube or Cylinder, the purpose of this sphere is just for positional reference. You will find this sphere in your AR view when running on a mobile device. We will need to add video view objects relative to this sphere’s position by code.

Modify Test Scene Script

Open TestHelloUnityVideo.cs and change the onUserJoin() method to generate a cube instead of a plane. We will add a function to provide a new position for each new remote user joining the chat.

View the code on Gist.

Modify User Joined Delegate

In the Join() method, add the following line in the “enable video” section:

mRtcEngine.EnableLocalVideo(false);

This call disables the front camera so it won’t conflict with the back camera, which is used by the AR Camera on the device.

Last but not least, fill in your APP ID for the variable declared in the beginning section of the TestHelloUnityVideo class.

If you haven’t already, go to Agora.io, log in and get an APP ID (it’s free). This will give you connectivity to the global Agora Real-time Communication network and allow you to broadcast across your living room or around the world. Also, your first 10,000 minutes are free every month.

Build Project

The configuration for building an ARFoundation enabled project is slightly different from the standard demo project.

Here is a quick checklist of things to set:

iOS:

  • Rendering Color Space = Linear
  • Graphics API = Metal
  • Architecture = ARM64
  • Target Minimum iOS Version = 11.0
  • A unique bundle id

Android:

  • Graphics API = GLES3
  • MultiThreaded Rendering = off
  • Minimum API Level = Android 7.0 (API level 24)
  • Create a new key store in the Publishing Settings

**Before running, to avoid colliding libraries, you will need to deactivate the files inside either the Assets/Plugins/x86 or Assets/Plugins/x86_64 folder: open the folder, highlight the files inside, uncheck them, and then hit the Apply button. The following screenshot shows the x86_64 libraries chosen to be deactivated.

Deactivating x86_64 library files

Now build the application for either iOS or Android. Run the standard demo application for the remote users from any of the four platforms that we discussed at the beginning of this tutorial. To test your demo, stand up and use the device to look around you, and you should find the sphere. This indicates that you have successfully created an AR scene. A joining remote user’s video will now be placed on the cubes next to the sphere. That’s it! You can now enjoy chatting with your friends in AR!

Great job! You’ve built a simple AR world of video chatters!

The ARFoundation Demo

All Done!

Thank you for following along. If you have any questions please feel free to leave a comment on our Slack channel agoraiodev.slack/unity-help-me.

Other Resources

The post Video Chat with Unity3D, the ARFoundation Version appeared first on Agora.io.

Number Plate Recognition Using TensorFlow and Agora.io


This blog was written by Shriya Ramakrishnan, an Agora Superstar. The Agora Superstar program empowers developers around the world to share their passion and technical expertise, and create innovative real-time communications apps and projects using Agora’s customizable SDKs. Think you’ve got what it takes to be an Agora Superstar? Apply here: https://www.agora.io/en/superstars-program/


Number plate recognition has a wide range of applications, from solving crimes to finding lost cars that get washed away during high-intensity floods.

This read is about a number plate recognition demo system created using TensorFlow and Agora.io. It will give you a quick understanding of the Python code used, function by function.

Agora.io is a Real-Time Engagement provider delivering voice, video, and live streaming on a global scale for mobile, native and desktop apps.

We will be using Agora.io’s live interactive video streaming for detecting number plates in real-time.

Let’s dive straight into the code!

Step 1: We need to set up a video call. To do so, sign up on Agora.io here. After signing up, log in and head to the ‘Project Management’ tab. Create a new project with a suitable name of your choice. Procure the app-id by copying it onto your clipboard and pasting it somewhere you will be able to access it later while developing the code.

Step 2: Go to my GitHub repository here.

Understanding the code:

detect.py

View the code on Gist.

In the code above, we use the functions of the AgoraRTC library from the agora_community_sdk package to connect to the video call from our remote terminal over the internet, using the Chromium driver and the Agora app-id you created. Enter your app-id, channel name, and the paths to the Chromium driver executable and in.png as directed in the code. The first frame from the live video will be extracted and saved.

View the code on Gist.

This is the function where the model parameters obtained after training the model are used to deduce the probable bounding boxes.

`bbox_tl` and `bbox_br` define the bounding box’s top-left and bottom-right corners respectively, and `letter_probs` is a 7×36 matrix giving the probability distribution of each of the seven plate characters over the possible letters and digits in the captured frame.

The image is rescaled to multiple sizes, and the model detects number plates using a sliding window at each scale.

Finally, it predicts the number plate(s) that have a greater than 50% probability of appearing across multiple scales.

View the code on Gist.

The above functions are used for the following two purposes

  1. Finding sets of overlapping rectangles, detected previously over the frame.
  2. Finding the intersection of those sets, along with the code corresponding with the rectangle with the highest presence parameter.

View the code on Gist.

The above function is used to join the probable letters detected on the number plate and return them as a string.

View the code on Gist.

This is the main function, where the paths to ‘in.png’ (the input frame, in which the number plate and registration number are to be detected), ‘out.png’ (in.png after processing, with a bounding box drawn around the number plate and the registration number) and ‘weights.npz’ (the weights obtained after training the model) are given. The rest of the functions explained above are called and executed in this section.

sample in.png

sample out.png

Step 3: To build the system as mentioned earlier in this blog, download my GitHub repository as a .zip folder here.

Prerequisites

  1. All Python libraries given in requirements.txt
  2. An Agora app-id
  3. The latest version of any text editor that supports Python, preferably Sublime Text 3

Build Instructions:

  1. Create an Agora.io account:
    • Sign-up and login here.
    • Navigate to the ‘project management’ tab in the dashboard.
    • Create a new project with a name of your choice.
    • Procure the app-id by copying it onto your clipboard and pasting it somewhere you will be able to access it later while developing the code.
  2. Install all the dependencies given in requirements.txt using:

    Windows:

    pip install -r requirements.txt

    Linux:

    sudo pip install -r requirements.txt
  3. Download a zip file of this repository.
  4. Download chromedriver.exe and weights.npz.
  5. Open detect.py in Sublime Text or any compatible text editor of your choice.
    • Paste the app-id and the path to chromedriver.exe as directed on line 20 of the code, and a channel name on line 21.
    • Paste the link to a high-resolution (approx. 2000 × 1500) image on line 30.
    • Give the paths to in.png on line 161 and weights.npz on line 164 as directed in the code.
    • Give the path to out.png on line 194.
  6. Go here.
    • Paste the app-id and channel name into the respective input boxes on the sender and receiver sides.
    • Click Join to activate the call on both the sender and receiver sides.
  7. Execute detect.py from the master folder on the terminal.
  8. The desired result will be in out.png.

That sums it up!

The post Number Plate Recognition Using TensorFlow and Agora.io appeared first on Agora.io.

How Live Video Streaming is Transforming the Fitness and Health Industries


The fitness industry has been undergoing dramatic growth in recent years as wearables like Fitbit and Apple Watch have grown in popularity and connected hardware companies like Peloton have built strong, devoted followings. But devices and group classes aren’t the only things trending. More people than ever are taking control of all areas of their health, be it physical, mental, or emotional.

In previous posts, we’ve discussed the impact live video has had and will continue to have on telehealth and how Agora has helped clients like Talkspace revolutionize mental health services, but live video broadcasting is capable of more than just connecting licensed professionals with patients. It can bring individuals, small groups, and even thousands of people together.

From personal training sessions to virtual wellness communities, live video is playing a major role in the wellness revolution, and it gives individuals and businesses alike new ways to create their own fitness and health-related live broadcasting experiences.

Live Fitness Sessions

According to the CDC, only 23 percent of U.S. adults are getting enough exercise. While a person’s geographic location, age, and health status play a role in how little or how much they work out, most people can probably attribute their lack of fitness to one major obstacle: it’s inconvenient. That’s why more and more people are looking for new, exciting ways to fit exercise into their busy schedules.

LiveKick, one of the emerging players in live video fitness, offers members the opportunity to work out with personal fitness coaches via live video up to three times per week. Designed for people with busy lives, LiveKick works around their schedules. Members also receive ongoing guidance and support from their trainer outside of class, so they’re encouraged to keep up with and meet their fitness goals. Other fitness startups like ClassPass and Peloton are also getting in on the live video trend by broadcasting fitness classes to thousands of people in real time. ClassPass offers a variety of live workout sessions on weekdays, while Peloton offers up to 14 “live rides” per day.

 

Image courtesy of Livekick

Engaged, Like-Minded Communities

One of the advantages of live video and voice is that it can bring people together without the constraints of a physical location. Hundreds or even thousands of individuals can gather virtually to share ideas and even create their own communities, like Zubia.

Zubia is a health and wellness live streaming community. Members can choose to host their own broadcasts on everything from diet and nutrition to medicine and healthcare. While plenty of health professionals host their own broadcasts, anyone can join the community, connect with others, and share valuable information. Users can also set reminders for upcoming broadcasts they’d like to join, or watch 300+ previously recorded broadcasts on demand at any time.

Creating a Live Video Experience

While a variety of wellness companies and communities are taking advantage of live video, many individual fitness trainers, wellness experts, and businesses don’t know that they can utilize RTC technology to grow their companies. While Instagram Live, Facebook Live, and other social media platforms are ideal for promoting content, they are not service-oriented and can’t be embedded into existing applications.

Agora works with businesses across a variety of industries including social, gaming, and of course, health and fitness. From one-to-one video for personal training sessions to group chat video calling for wellness seminars, enterprises of all sizes can take advantage of Agora’s live voice, video, and interactive broadcasting services. The Agora SDK allows for deep customization, so clients can create their own unique, interactive live video experiences for their customers.

Ready to learn more about how Agora can help your burgeoning health or fitness business? Talk to our team.

The post How Live Video Streaming is Transforming the Fitness and Health Industries appeared first on Agora.io.
