WebSocket: exploring its power with voice and images

2015/12/26 · JavaScript
· 3 comments ·
websocket

Original source: AlloyTeam

WebSocket should be familiar to most of us by now; if it isn't, that's fine too, because one sentence sums it up:

"The WebSocket protocol is a new protocol in HTML5. It enables full-duplex communication between the browser and the server"

Compared with the traditional server-push techniques, WebSocket is simply much better: we can wave goodbye to comet and long polling, and be glad we live in an era that has HTML5~

This article explores WebSocket in three parts.

The first is common uses of WebSocket; the second is building a server-side WebSocket entirely by ourselves; the last is a closer look at two demos built with WebSocket, image transfer and an online voice chat room. Let's go

I. Common WebSocket usage

Here I'll introduce three WebSocket implementations I consider common... (Note: this article assumes a Node.js context)

1. socket.io

First, a demo

JavaScript

var http = require('http');
var io = require('socket.io');

var server = http.createServer(function(req, res) {
    res.writeHead(200, {'content-type': 'text/html;charset="utf-8"'});
    res.end();
}).listen(8888);

var socket = io.listen(server);

socket.sockets.on('connection', function(socket) {
    socket.emit('xxx', { /* options */ });

    socket.on('xxx', function(data) {
        // do something
    });
});

If you know WebSocket at all, you can't not know socket.io: it is famous, and deservedly so, handling timeouts, handshakes and the rest for you. I'd guess it is also the most widely used WebSocket implementation. Its finest feature is graceful degradation: when the browser doesn't support WebSocket, it quietly falls back to long polling and the like internally, and neither users nor developers need to care about the details. Very convenient.
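For completeness, the browser side of that demo is just as short. This is only a minimal sketch, assuming the page loads the client library served by the same process at /socket.io/socket.io.js, and reusing the placeholder event name 'xxx' from the server demo above:

JavaScript

// the page loads /socket.io/socket.io.js first, which defines the global `io`
var socket = io.connect('http://localhost:8888');

// react to events pushed from the server
socket.on('xxx', function(data) {
    console.log(data);
});

// and push our own events back
socket.emit('xxx', { /* options */ });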

But everything has two sides. Precisely because socket.io is so complete, it comes with its own pitfalls, the main one being bloat: its wrapping adds quite a bit of communication overhead to the data, and the graceful-degradation advantage is gradually losing its shine as browser standardization moves forward

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

This isn't to accuse socket.io of being bad or obsolete, just to say that sometimes we can also consider other implementations~

 

2. The http module

Having just called socket.io bloated, let's now look at something lightweight. First, the demo

JavaScript

var http = require('http');
var server = http.createServer();
server.on('upgrade', function(req) {
    console.log(req.headers);
});
server.listen(8888);

A very simple implementation. In fact socket.io handles WebSocket the same way internally, except that it then wraps a layer of handlers on top for us; we could add those ourselves as well. Here are two screenshots from the socket.io source

[Figure: socket.io source, screenshot 1]

[Figure: socket.io source, screenshot 2]

 

3. The ws module

An example later in the article uses it, so I'll just mention it here; we'll look at the details then~
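As a teaser, a standalone ws server is only a few lines. A rough sketch (the port and the echo behaviour are just for illustration; the real usage shows up in part three):

JavaScript

var WebSocketServer = require('ws').Server;

// standalone WebSocket server listening on 8888
var wss = new WebSocketServer({ port: 8888 });

wss.on('connection', function(ws) {
    ws.on('message', function(message) {
        // echo whatever the client sends back to it
        ws.send(message);
    });
});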

 

II. Implementing a server-side WebSocket ourselves

We've just seen three common ways of using WebSocket. Now think about it from the developer's point of view:

compared with traditional HTTP data exchange, WebSocket adds server-pushed events, and the client handles each event as it arrives; the development experience isn't actually all that different.

That's because those modules have already filled in the pitfalls of data-frame parsing for us. In this second part we'll try to build a simple server-side WebSocket module of our own

Thanks to 次碳酸钴 for the research support. I'll only sketch this part briefly here; if it piques your curiosity, search for 【web技术研究所】

Implementing server-side WebSocket yourself comes down to two things: use the net module to receive the data stream, and parse the data against the official frame-structure diagram. Do those two parts and all the low-level work is done

First, a packet capture of the WebSocket handshake request a client sends

The client code is trivial

JavaScript

ws = new WebSocket("ws://127.0.0.1:8888");

[Figure: packet capture of the client's WebSocket handshake request]
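For reference, such a handshake request looks roughly like this (the Sec-WebSocket-Key is random per connection; the value below is the illustrative example from RFC 6455):

    GET / HTTP/1.1
    Host: 127.0.0.1:8888
    Connection: Upgrade
    Upgrade: websocket
    Sec-WebSocket-Version: 13
    Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==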

The server has to validate this key: append a fixed string to the key, run SHA-1 over it once, and send the result back base64-encoded

JavaScript

var crypto = require('crypto');
var WS = '258EAFA5-E914-47DA-95CA-C5AB0DC85B11';

require('net').createServer(function(o) {
    var key;
    o.on('data', function(e) {
        if(!key) {
            // grab the Sec-WebSocket-Key sent by the client
            key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
            // append the fixed WS string, SHA-1 it once, then base64-encode the result
            key = crypto.createHash('sha1').update(key + WS).digest('base64');
            // write the response back to the client; all of these fields are required
            o.write('HTTP/1.1 101 Switching Protocols\r\n');
            o.write('Upgrade: websocket\r\n');
            o.write('Connection: Upgrade\r\n');
            // this field carries the key computed by the server
            o.write('Sec-WebSocket-Accept: ' + key + '\r\n');
            // an empty line terminates the HTTP headers
            o.write('\r\n');
        }
    });
}).listen(8888);

With that, the handshake part is done; what's left is parsing and generating data frames

First, the frame-structure diagram from the official spec

[Figure: WebSocket frame structure, from the official spec]

A quick walkthrough:

FIN marks whether this is the final frame

the RSV bits are reserved, normally 0

opcode identifies the data type: fragmentation, text vs. binary parsing, heartbeat frames and so on

Here is the opcode mapping

[Figure: opcode values]
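For reference, the opcode values defined in RFC 6455 are:

0x0: continuation frame
0x1: text frame
0x2: binary frame
0x3 to 0x7: reserved for further non-control frames
0x8: connection close
0x9: ping
0xA: pong
0xB to 0xF: reserved for further control frames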

MASK indicates whether the payload is masked

Payload len, together with the extended payload length that follows, describes the data length; this is the most troublesome part

Payload len is only 7 bits, so as an unsigned integer it can only take values from 0 to 127. Such a small number obviously cannot describe larger payloads, so the spec says it is the actual length only when the data length is 125 or less; if this value is 126, the following two bytes store the length; if it is 127, the following eight bytes store the length
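A quick worked example of that rule (the numbers are only illustrative): a masked text frame carrying 300 bytes cannot fit the length into 7 bits, so it uses the 126 marker plus a 16-bit big-endian length:

JavaScript

// header bytes for a masked text frame with a 300-byte payload
var len = 300;                        // 300 > 125, so the 126 + 2-byte form is used
var header = [
    0x81,                             // FIN = 1, opcode = 0x1 (text)
    0x80 | 126,                       // MASK = 1, payload len marker = 126
    (len >> 8) & 0xFF, len & 0xFF     // extended length: 0x01, 0x2C
];
// the 4-byte masking key and the masked payload bytes follow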

Masking-key is the mask itself

Below is the code that parses a data frame

JavaScript

function decodeDataFrame(e) {
    var i = 0,
        j, s,
        frame = {
            FIN: e[i] >> 7,
            Opcode: e[i++] & 15,
            Mask: e[i] >> 7,
            PayloadLength: e[i++] & 0x7F
        };

    if(frame.PayloadLength === 126) {
        frame.PayloadLength = (e[i++] << 8) + e[i++];
    }

    if(frame.PayloadLength === 127) {
        i += 4;
        frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
    }

    if(frame.Mask) {
        frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];

        for(j = 0, s = []; j < frame.PayloadLength; j++) {
            s.push(e[i+j] ^ frame.MaskingKey[j%4]);
        }
    } else {
        s = e.slice(i, i+frame.PayloadLength);
    }

    s = new Buffer(s);

    if(frame.Opcode === 1) {
        s = s.toString();
    }

    frame.PayloadData = s;
    return frame;
}

Next, the code that generates a data frame

JavaScript

function encodeDataFrame(e) {
    var s = [],
        o = new Buffer(e.PayloadData),
        l = o.length;

    s.push((e.FIN << 7) + e.Opcode);

    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l&0xFF00) >> 8, l&0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
    }

    return Buffer.concat([new Buffer(s), o]);
}

Everything above just follows the frame-structure diagram, so I won't go into more detail here; the heart of this article is the next part. If this area interests you, head over to web技术研究所~

 

III. Transferring images and a voice chat room over WebSocket

Now for the main event: what this article most wants to show are some WebSocket use cases

1. Transferring images

Let's think through the steps of transferring an image: the server receives the client's request, reads the image file, and forwards the binary data to the client. How does the client handle it? With a FileReader object, of course

Client code first

JavaScript

var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");

ws.onopen = function(){
    console.log("handshake succeeded");
};

ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

On receiving a message, readAsDataURL turns it into a base64 data URL and we add the image straight to the page

Now the server-side code

JavaScript

fs.readdir("skyland", function(err, files) {
    if(err) {
        throw err;
    }
    for(var i = 0; i < files.length; i++) {
        fs.readFile('skyland/' + files[i], function(err, data) {
            if(err) {
                throw err;
            }

            o.write(encodeImgFrame(data));
        });
    }
});

function encodeImgFrame(buf) {
    var s = [],
        l = buf.length,
        ret = [];

    s.push((1 << 7) + 2);

    if(l < 126) {
        s.push(l);
    } else if(l < 0x10000) {
        s.push(126, (l&0xFF00) >> 8, l&0xFF);
    } else {
        s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
    }

    return Buffer.concat([new Buffer(s), buf]);
}

Note the line s.push((1 << 7) + 2): the opcode is deliberately hard-coded to 2 here, i.e. a Binary Frame, so the client won't try to toString the received data (which would otherwise throw)~

The code is simple. Here I'd like to share how WebSocket image transfer actually performs

I tested with a batch of images, 8.24 MB in total

An ordinary static-resource server takes about 20s (the server is far away)

A CDN takes about 2.8s

And our WebSocket approach??!

The answer: also about 20s. Disappointing, right... The slowness is in the transfer itself, not in the server reading the images; on the local machine the same set of images finishes in about 1s. So it seems a raw data stream can't break the limits that distance puts on transfer speed either

Next, let's look at another use of WebSocket~

 

Building a voice chat room with WebSocket

First, let's lay out what the voice chat room does

A user joins a channel, audio is captured from the microphone and sent to the backend, which relays it to everyone else in the channel; they receive the message and play it back

The difficulty looks like it lies in two places: the first is capturing the audio input, the second is playing back the received data stream

Audio input first. Here we use HTML5's getUserMedia method, but be warned: this method has a huge pitfall when you go live, which I'll cover at the end. Code first

JavaScript

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

The first argument is {audio: true}, enabling audio only; we then create an SRecorder object, and almost everything that follows happens on that object. At this point, if the code is running locally, the browser should ask whether to allow microphone input; confirm, and we're up and running
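As a side note (not part of the original demo), newer browsers expose the same capability through the promise-based navigator.mediaDevices.getUserMedia; a rough equivalent sketch:

JavaScript

// promise-based variant in newer browsers (sketch only)
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function (stream) {
        recorder = new SRecorder(stream);
    })
    .catch(function (err) {
        console.log('microphone unavailable: ' + err);
    });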

Next let's look at the SRecorder constructor; here are the important parts

JavaScript

var SRecorder = function(stream) {
    ……
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done sound filtering will know the idea: "before a piece of audio reaches the speakers, we intercept it on the way, and that gives us the audio data; this interception is done by window.AudioContext, and all our audio operations are based on this object". Through AudioContext we can create different AudioNode nodes and add filters to play more specialised sound

Recording works on the same principle: we still go through AudioContext, but with an extra step of accepting the microphone's audio input, instead of requesting an audio ArrayBuffer with ajax and decoding it as we used to. Accepting the microphone requires the createMediaStreamSource method; note that its argument is exactly the stream handed to getUserMedia's success callback

As for the createScriptProcessor method, its official description is:

Creates a ScriptProcessorNode, which can be used for direct audio
processing via JavaScript.

——————

In short: this method lets you process the captured audio directly with JavaScript

We've finally reached audio capture! Victory is in sight!

Next, let's wire the microphone input and the audio capture together

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official description of context.destination is as follows

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

context.destination returns the final destination of all the audio in the context.

Good. At this point we still need an event that listens for the captured audio

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method because it's needed later. The encodeWAV method in it is excellent: the author did a lot of audio compression and optimisation. The complete code is attached at the end

With that, the whole "user joins a channel and speaks into the microphone" part is done. Next comes sending the audio stream to the server, and here it gets slightly annoying: as we said above, WebSocket can mark the data as text or binary via the opcode, but what onaudioprocess feeds into input is an array, while playback ultimately needs a Blob of {type: 'audio/wav'}. So before sending we must convert the array into a WAV Blob, which is exactly what the encodeWAV method mentioned above does
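Put together, the push-to-talk send path boils down to the sketch below (the names match the full front-end code further down, where gRecorder is the SRecorder instance and ws the wss connection):

JavaScript

// key down: start routing microphone samples into audioData
gRecorder.start();

// key up: encode the buffered samples as a WAV Blob and send it
ws.send(gRecorder.getBlob());   // getBlob() calls audioData.encodeWAV()
gRecorder.clear();              // drop the buffered samples
gRecorder.stop();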

The server looks trivial; it only has to relay

Local testing did work, but then the giant pitfall arrived: when the program runs on a real server, calling getUserMedia tells me it must be in a secure context, i.e. it needs https, which means ws has to become wss too... So the server code below no longer uses our own handshake, parsing and encoding; it ended up like this

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });

    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return ;
        }

        res.end(data);
    });
});

var wss = new ws.Server({server: server});

wss.on('connection', function(o) {
    o.on('message', function(message) {
        if(message.indexOf('user') === 0) {
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});

server.listen(8888);

The code is still quite simple: the https module plus the ws module mentioned at the start. userMap is a mock channel that only implements the basic relay

I picked the ws module because it pairs with https to give wss with almost no effort, and with zero conflict with the logic code

I won't cover setting up https here; mainly you need a private key, a CSR / certificate signing, and the certificate file. Interested readers can look it up (without it you can't use getUserMedia on a live site anyway...)

Below is the complete front-end code

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;

var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;

b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a user name');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }

    SRecorder.get(function (rec) {
        gRecorder = rec;
    });

    ws = new WebSocket("wss://x.x.x.x:8888");

    ws.onopen = function() {
        console.log('handshake succeeded');
        ws.send('user:' + a.value);
    };

    ws.onmessage = function(e) {
        receive(e.data);
    };

    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };

    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}

c.onclick = function() {
    if(ws) {
        ws.close();
    }
}

var SRecorder = function(stream) {
    config = {};

    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);

    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);

    var audioData = {
        size: 0          // length of the recording
        , buffer: []     // recording buffer
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample bits: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , oututSampleBits: config.sampleBits       // output sample bits: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.oututSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);

            var channelCount = 1; // mono
            var offset = 0;

            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };

            // RIFF chunk identifier
            writeString('RIFF'); offset += 4;
            // byte count from the next address to end of file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAV file marker
            writeString('WAVE'); offset += 4;
            // format chunk marker
            writeString('fmt '); offset += 4;
            // format chunk length, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM)
            data.setUint16(offset, 1, true); offset += 2;
            // number of channels
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second per channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels x sample rate x bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame, channels x bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk identifier
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val, true);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }

            return new Blob([data], { type: 'audio/wav' });
        }
    };

    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }

    this.stop = function () {
        recorder.disconnect();
    }

    this.getBlob = function () {
        return audioData.encodeWAV();
    }

    this.clear = function() {
        audioData.clear();
    }

    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};

SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}

function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold the A key to talk, release it to send

I did try a push-free, real-time mode, sending with setInterval, but the background noise was heavy and the result poor; that would need another layer on top of encodeWAV to strip ambient noise, so I went with the simpler push-to-talk approach

 

This article first surveyed WebSocket's common uses and prospects, then, following the spec, we tried parsing and generating the data frames ourselves, gaining a deeper understanding of WebSocket

Finally, the two demos showed WebSocket's potential. The voice chat room demo touches on quite a lot; if you've never used the AudioContext object it's best to read up on AudioContext first

That's it for this article~ If you have any thoughts or questions, feel free to raise them so we can discuss and explore together~

 

1 like · 11 bookmarks · 3 comments

皇家赌场手机版 6

原稿出处:
AlloyTeam   

初稿出处:
AlloyTeam   

原稿出处:
AlloyTeam   

说到websocket想比大家不会陌生,假若陌生的话也没提到,一句话概括

说到websocket想比我们不会陌生,假若陌生的话也没涉及,一句话概括

说到websocket想比大家不会陌生,借使陌生的话也没涉及,一句话概括

“WebSocket protocol
是HTML5一种新的说道。它完结了浏览器与服务器全双工通讯”

“WebSocket protocol
是HTML5一种新的情商。它已毕了浏览器与服务器全双工通信”

“WebSocket protocol
是HTML5一种新的说道。它完毕了浏览器与服务器全双工通讯”

WebSocket相比较传统那些服务器推技术几乎好了太多,大家可以挥手向comet和长轮询那几个技巧说拜拜啦,庆幸我们生存在有着HTML5的时期~

WebSocket相相比较传统那么些服务器推技术大概好了太多,大家得以挥手向comet和长轮询这几个技术说拜拜啦,庆幸大家生存在富有HTML5的时日~

WebSocket相相比传统这几个服务器推技术大约好了太多,我们得以挥手向comet和长轮询那个技巧说拜拜啦,庆幸大家生存在享有HTML5的一时~

那篇文章大家将分三部分探索websocket

那篇文章大家将分三局地探索websocket

那篇作品我们将分三部分探索websocket

第一是websocket的广泛使用,其次是一心自己制作服务器端websocket,最后是至关紧要介绍利用websocket制作的四个demo,传输图片和在线语音聊天室,let’s
go

第一是websocket的常见使用,其次是截然自己打造服务器端websocket,最后是重大介绍利用websocket制作的多少个demo,传输图片和在线语音聊天室,let’s
go

率先是websocket的宽泛使用,其次是一心自己制作服务器端websocket,最后是至关紧要介绍利用websocket制作的三个demo,传输图片和在线语音聊天室,let’s
go

一、websocket常见用法

一、websocket常见用法

一、websocket常见用法

那里介绍三种自己觉得大规模的websocket完结……(留意:本文建立在node上下文环境

那边介绍二种自己认为大规模的websocket达成……(专注:本文建立在node上下文环境

那边介绍两种自我以为大规模的websocket完结……(留神:本文建立在node上下文环境websocket探索其与话音。)

1、socket.io

1、socket.io

1、socket.io

先给demo

先给demo

先给demo

JavaScript

JavaScript

JavaScript

var http = require(‘http’); var io = require(‘socket.io’); var server =
http.createServer(function(req, res) { res.writeHeader(200,
{‘content-type’: ‘text/html;charset=”utf-8″‘}); res.end();
}).listen(8888); var socket =.io.listen(server);
socket.sockets.on(‘connection’, function(socket) { socket.emit(‘xxx’,
{options}); socket.on(‘xxx’, function(data) { // do someting }); });

var http = require(‘http’); var io = require(‘socket.io’); var server =
http.createServer(function(req, res) { res.writeHeader(200,
{‘content-type’: ‘text/html;charset=”utf-8″‘}); res.end();
}).listen(8888); var socket =.io.listen(server);
socket.sockets.on(‘connection’, function(socket) { socket.emit(‘xxx’,
{options}); socket.on(‘xxx’, function(data) { // do someting }); });

var http = require(‘http’); var io = require(‘socket.io’); var server =
http.createServer(function(req, res) { res.writeHeader(200,
{‘content-type’: ‘text/html;charset=”utf-8″‘}); res.end();
}).listen(8888); var socket =.io.listen(server);
socket.sockets.on(‘connection’, function(socket) { socket.emit(‘xxx’,
{options}); socket.on(‘xxx’, function(data) { // do someting }); });

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
var http = require(‘http’);
var io = require(‘socket.io’);
 
var server = http.createServer(function(req, res) {
    res.writeHeader(200, {‘content-type’: ‘text/html;charset="utf-8"’});
    res.end();
}).listen(8888);
 
var socket =.io.listen(server);
 
socket.sockets.on(‘connection’, function(socket) {
    socket.emit(‘xxx’, {options});
 
    socket.on(‘xxx’, function(data) {
        // do someting
    });
});
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
var http = require(‘http’);
var io = require(‘socket.io’);
 
var server = http.createServer(function(req, res) {
    res.writeHeader(200, {‘content-type’: ‘text/html;charset="utf-8"’});
    res.end();
}).listen(8888);
 
var socket =.io.listen(server);
 
socket.sockets.on(‘connection’, function(socket) {
    socket.emit(‘xxx’, {options});
 
    socket.on(‘xxx’, function(data) {
        // do someting
    });
});
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
var http = require(‘http’);
var io = require(‘socket.io’);
 
var server = http.createServer(function(req, res) {
    res.writeHeader(200, {‘content-type’: ‘text/html;charset="utf-8"’});
    res.end();
}).listen(8888);
 
var socket =.io.listen(server);
 
socket.sockets.on(‘connection’, function(socket) {
    socket.emit(‘xxx’, {options});
 
    socket.on(‘xxx’, function(data) {
        // do someting
    });
});

相信明白websocket的同窗不容许不知晓socket.io,因为socket.io太闻明了,也很棒,它自身对过期、握手等都做了拍卖。我预计那也是促成websocket使用最多的措施。socket.io最最最美好的某些就是优雅降级,当浏览器不匡助websocket时,它会在其中优雅降级为长轮询等,用户和开发者是不要求关心具体贯彻的,很便宜。

相信明白websocket的校友不容许不理解socket.io,因为socket.io太知名了,也很棒,它自身对逾期、握手等都做了拍卖。我猜度那也是完毕websocket使用最多的法子。socket.io最最最精美的一些就是优雅降级,当浏览器不辅助websocket时,它会在其间优雅降级为长轮询等,用户和开发者是不须要关心具体落实的,很便利。

相信了然websocket的同桌不容许不明白socket.io,因为socket.io太盛名了,也很棒,它本身对逾期、握手等都做了拍卖。我揣测那也是兑现websocket使用最多的法门。socket.io最最最美丽的一些就是优雅降级,当浏览器不支持websocket时,它会在里边优雅降级为长轮询等,用户和开发者是不须要关爱具体已毕的,很有利。

可是工作是有两面性的,socket.io因为它的通盘也拉动了坑的地方,最要紧的就是臃肿,它的卷入也给多少拉动了较多的报道冗余,而且优雅降级这一独到之处,也陪伴浏览器标准化的开展逐级失去了了不起

只是工作是有两面性的,socket.io因为它的不可偏废也推动了坑的地点,最重点的就是臃肿,它的包装也给多少牵动了较多的简报冗余,而且优雅降级这一优点,也伴随浏览器标准化的展开逐步失去了了不起

唯独工作是有两面性的,socket.io因为它的健全也拉动了坑的地点,最重视的就是臃肿,它的包裹也给多少拉动了较多的通信冗余,而且优雅降级这一亮点,也随同浏览器标准化的拓展逐级失去了惊天动地

Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+
Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+
Chrome Supported in version 4+
Firefox Supported in version 4+
Internet Explorer Supported in version 10+
Opera Supported in version 10+
Safari Supported in version 5+

在此间不是指责说socket.io不好,已经被淘汰了,而是有时候大家也能够考虑部分其余的贯彻~

在此处不是指责说socket.io不好,已经被淘汰了,而是有时候大家也能够考虑部分任何的落到实处~

在那里不是指责说socket.io不佳,已经被淘汰了,而是有时候大家也足以考虑部分其余的兑现~

 

 

 

2、http模块

2、http模块

2、http模块

凑巧说了socket.io臃肿,那现在就来说说便捷的,首先demo

正巧说了socket.io臃肿,那现在就来说说便捷的,首先demo

恰恰说了socket.io臃肿,那现在就来说说便捷的,首先demo

JavaScript

JavaScript

JavaScript

var http = require(‘http’); var server = http.createServer();
server.on(‘upgrade’, function(req) { console.log(req.headers); });
server.listen(8888);

var http = require(‘http’); var server = http.createServer();
server.on(‘upgrade’, function(req) { console.log(req.headers); });
server.listen(8888);

var http = require(‘http’); var server = http.createServer();
server.on(‘upgrade’, function(req) { console.log(req.headers); });
server.listen(8888);

1
2
3
4
5
6
var http = require(‘http’);
var server = http.createServer();
server.on(‘upgrade’, function(req) {
console.log(req.headers);
});
server.listen(8888);
1
2
3
4
5
6
var http = require(‘http’);
var server = http.createServer();
server.on(‘upgrade’, function(req) {
console.log(req.headers);
});
server.listen(8888);
1
2
3
4
5
6
var http = require(‘http’);
var server = http.createServer();
server.on(‘upgrade’, function(req) {
console.log(req.headers);
});
server.listen(8888);

很简单的完毕,其实socket.io内部对websocket也是这么完结的,但是前边帮大家封装了一些handle处理,那里大家也可以自己去丰硕,给出两张socket.io中的源码图

很粗略的兑现,其实socket.io内部对websocket也是这么已毕的,不过后面帮大家封装了一部分handle处理,那里我们也足以协调去丰富,给出两张socket.io中的源码图

很简短的兑现,其实socket.io内部对websocket也是如此完结的,不过前面帮大家封装了有些handle处理,那里大家也得以协调去丰盛,给出两张socket.io中的源码图

皇家赌场手机版 7

皇家赌场手机版 8

皇家赌场手机版 9

皇家赌场手机版 10

皇家赌场手机版 11

皇家赌场手机版 12

 

 

 

3、ws模块

3、ws模块

3、ws模块

末尾有个例证会用到,那里就提一下,前面具体看~

末端有个例证会用到,那里就提一下,后边具体看~

末端有个例子会用到,那里就提一下,后边具体看~

 

 

 

二、自己落成一套server端websocket

二、自己落成一套server端websocket

二、自己完结一套server端websocket

凑巧说了二种常见的websocket达成格局,现在大家想想,对于开发者来说

正好说了三种常见的websocket达成形式,现在大家寻思,对于开发者来说

司空见惯说了三种普遍的websocket落成形式,现在大家寻思,对于开发者来说

websocket相对于传统http数据交互方式以来,伸张了服务器推送的轩然大波,客户端接收到事件再开展对应处理,开发起来分歧并不是太大呀

websocket相对于传统http数据交互形式以来,增加了服务器推送的风云,客户端接收到事件再展开相应处理,开发起来不一样并不是太大呀

websocket相对于传统http数据交互方式以来,增加了服务器推送的事件,客户端接收到事件再开展对应处理,开发起来差别并不是太大呀

那是因为那么些模块已经帮大家将数据帧解析此间的坑都填好了,第二片段大家将尝试自己创设一套简便的服务器端websocket模块

那是因为那么些模块已经帮大家将数量帧解析此地的坑都填好了,第二局地我们将尝试自己制作一套简便的服务器端websocket模块

那是因为那么些模块已经帮大家将数量帧解析这里的坑都填好了,第二有些大家将尝试自己制作一套简便的服务器端websocket模块

谢谢次碳酸钴的钻研协理,自身在此间那有的只是不难说下,若是对此有趣味好奇的请百度【web技术研究所】

谢谢次碳酸钴的研究辅助,自身在此处那有些只是简短说下,固然对此有趣味好奇的请百度【web技术商量所】

感谢次碳酸钴的啄磨扶助,自身在那边这一部分只是简单说下,如若对此有趣味好奇的请百度【web技术研商所】

团结形成服务器端websocket主要有两点,一个是利用net模块接受数据流,还有一个是相对而言官方的帧结构图解析数据,完毕那两部分就早已成功了全副的最底层工作

团结成功服务器端websocket主要有两点,一个是运用net模块接受数据流,还有一个是比照官方的帧结构图解析数据,落成这两片段就早已做到了方方面面的最底层工作

友好姣好服务器端websocket首要有两点,一个是行使net模块接受数据流,还有一个是比较官方的帧结构图解析数据,完结那两有些就曾经落成了整套的底层工作

首先给一个客户端发送websocket握手报文的抓包内容

先是给一个客户端发送websocket握手报文的抓包内容

率先给一个客户端发送websocket握手报文的抓包内容

客户端代码很简单

客户端代码很粗略

客户端代码很不难

JavaScript

JavaScript

JavaScript

ws = new WebSocket(“ws://127.0.0.1:8888”);

ws = new WebSocket(“ws://127.0.0.1:8888”);

ws = new WebSocket(“ws://127.0.0.1:8888”);

1
ws = new WebSocket("ws://127.0.0.1:8888");
1
ws = new WebSocket("ws://127.0.0.1:8888");
1
ws = new WebSocket("ws://127.0.0.1:8888");

皇家赌场手机版 13

皇家赌场手机版 14

皇家赌场手机版 15

劳动器端要指向那么些key验证,就是讲key加上一个一定的字符串后做一次sha1运算,将其结果转换为base64送回来

劳动器端要针对那些key验证,就是讲key加上一个一定的字符串后做四遍sha1运算,将其结果转换为base64送回到

劳动器端要对准这一个key验证,就是讲key加上一个一定的字符串后做一回sha1运算,将其结果转换为base64送回去

JavaScript

JavaScript

JavaScript

var crypto = require(‘crypto’); var WS =
‘258EAFA5-E914-47DA-95CA-C5AB0DC85B11’;
require(‘net’).createServer(function(o) { var key;
o.on(‘data’,function(e) { if(!key) { // 获取发送过来的KEY key =
e.toString().match(/Sec-WebSocket-Key: (.+)/)[1]; //
连接上WS那么些字符串,并做四遍sha1运算,最后转换成Base64 key =
crypto.createHash(‘sha1’).update(key+WS).digest(‘base64’); //
输出再次来到给客户端的数额,这一个字段都是必须的 o.write(‘HTTP/1.1 101
Switching Protocols\r\n’); o.write(‘Upgrade: websocket\r\n’);
o.write(‘Connection: Upgrade\r\n’); // 这几个字段带上服务器处理后的KEY
o.write(‘Sec-WebSocket-Accept: ‘+key+’\r\n’); //
输出空行,使HTTP头截至 o.write(‘\r\n’); } }); }).listen(8888);

var crypto = require(‘crypto’); var WS =
‘258EAFA5-E914-47DA-95CA-C5AB0DC85B11’;
require(‘net’).createServer(function(o) { var key;
o.on(‘data’,function(e) { if(!key) { // 获取发送过来的KEY key =
e.toString().match(/Sec-WebSocket-Key: (.+)/)[1]; //
连接上WS那几个字符串,并做一回sha1运算,最终转换成Base64 key =
crypto.createHash(‘sha1’).update(key+WS).digest(‘base64’); //
输出再次来到给客户端的数额,那个字段都是必须的 o.write(‘HTTP/1.1 101
Switching Protocols\r\n’); o.write(‘Upgrade: websocket\r\n’);
o.write(‘Connection: Upgrade\r\n’); // 那么些字段带上服务器处理后的KEY
o.write(‘Sec-WebSocket-Accept: ‘+key+’\r\n’); //
输出空行,使HTTP头甘休 o.write(‘\r\n’); } }); }).listen(8888);

var crypto = require(‘crypto’); var WS =
‘258EAFA5-E914-47DA-95CA-C5AB0DC85B11’;
require(‘net’).createServer(function(o) { var key;
o.on(‘data’,function(e) { if(!key) { // 获取发送过来的KEY key =
e.toString().match(/Sec-WebSocket-Key: (.+)/)[1]; //
连接上WS这些字符串,并做一遍sha1运算,最终转换成Base64 key =
crypto.createHash(‘sha1’).update(key+WS).digest(‘base64’); //
输出重回给客户端的数码,那么些字段都是必须的 o.write(‘HTTP/1.1 101
Switching Protocols\r\n’); o.write(‘Upgrade: websocket\r\n’);
o.write(‘Connection: Upgrade\r\n’); // 那些字段带上服务器处理后的KEY
o.write(‘Sec-WebSocket-Accept: ‘+key+’\r\n’); //
输出空行,使HTTP头为止 o.write(‘\r\n’); } }); }).listen(8888);

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
var crypto = require(‘crypto’);
var WS = ‘258EAFA5-E914-47DA-95CA-C5AB0DC85B11’;
 
require(‘net’).createServer(function(o) {
var key;
o.on(‘data’,function(e) {
if(!key) {
// 获取发送过来的KEY
key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
// 连接上WS这个字符串,并做一次sha1运算,最后转换成Base64
key = crypto.createHash(‘sha1’).update(key+WS).digest(‘base64’);
// 输出返回给客户端的数据,这些字段都是必须的
o.write(‘HTTP/1.1 101 Switching Protocols\r\n’);
o.write(‘Upgrade: websocket\r\n’);
o.write(‘Connection: Upgrade\r\n’);
// 这个字段带上服务器处理后的KEY
o.write(‘Sec-WebSocket-Accept: ‘+key+’\r\n’);
// 输出空行,使HTTP头结束
o.write(‘\r\n’);
}
});
}).listen(8888);
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
var crypto = require(‘crypto’);
var WS = ‘258EAFA5-E914-47DA-95CA-C5AB0DC85B11’;
 
require(‘net’).createServer(function(o) {
var key;
o.on(‘data’,function(e) {
if(!key) {
// 获取发送过来的KEY
key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
// 连接上WS这个字符串,并做一次sha1运算,最后转换成Base64
key = crypto.createHash(‘sha1’).update(key+WS).digest(‘base64’);
// 输出返回给客户端的数据,这些字段都是必须的
o.write(‘HTTP/1.1 101 Switching Protocols\r\n’);
o.write(‘Upgrade: websocket\r\n’);
o.write(‘Connection: Upgrade\r\n’);
// 这个字段带上服务器处理后的KEY
o.write(‘Sec-WebSocket-Accept: ‘+key+’\r\n’);
// 输出空行,使HTTP头结束
o.write(‘\r\n’);
}
});
}).listen(8888);
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
var crypto = require(‘crypto’);
var WS = ‘258EAFA5-E914-47DA-95CA-C5AB0DC85B11’;
 
require(‘net’).createServer(function(o) {
var key;
o.on(‘data’,function(e) {
if(!key) {
// 获取发送过来的KEY
key = e.toString().match(/Sec-WebSocket-Key: (.+)/)[1];
// 连接上WS这个字符串,并做一次sha1运算,最后转换成Base64
key = crypto.createHash(‘sha1’).update(key+WS).digest(‘base64’);
// 输出返回给客户端的数据,这些字段都是必须的
o.write(‘HTTP/1.1 101 Switching Protocols\r\n’);
o.write(‘Upgrade: websocket\r\n’);
o.write(‘Connection: Upgrade\r\n’);
// 这个字段带上服务器处理后的KEY
o.write(‘Sec-WebSocket-Accept: ‘+key+’\r\n’);
// 输出空行,使HTTP头结束
o.write(‘\r\n’);
}
});
}).listen(8888);

如此握手部分就已经到位了,前边就是多少帧解析与转变的活了

这样握手部分就曾经完成了,前面就是多少帧解析与转移的活了

诸如此类握手部分就早已做到了,后边就是多少帧解析与变化的活了

先看下官方提供的帧结构示意图

先看下官方提供的帧结构示意图

先看下官方提供的帧结构示意图

皇家赌场手机版 16

皇家赌场手机版 17

皇家赌场手机版 18

简易介绍下

概括介绍下

几乎介绍下

FIN为是不是得了的标示

FIN为是或不是终止的标记

FIN为是否截止的标志

RSV为预留空间,0

RSV为留住空间,0

RSV为预留空间,0

opcode标识数据类型,是还是不是分片,是或不是二进制解析,心跳包等等

opcode标识数据类型,是还是不是分片,是或不是二进制解析,心跳包等等

opcode标识数据类型,是还是不是分片,是不是二进制解析,心跳包等等

交由一张opcode对应图

提交一张opcode对应图

交付一张opcode对应图

皇家赌场手机版 19

皇家赌场手机版 20

皇家赌场手机版 21

MASK是还是不是选拔掩码

MASK是或不是利用掩码

MASK是或不是选拔掩码

Payload len和后边extend payload length表示数据长度,那些是最劳顿的

Payload len和前面extend payload length表示数据长度,那么些是最麻烦的

Payload len和前面extend payload length表示数据长度,那几个是最劳苦的

PayloadLen唯有7位,换成无符号整型的话只有0到127的取值,这么小的数值当然无法描述较大的多寡,因此确定当数码长度小于或等于125时候它才作为数据长度的叙说,若是这些值为126,则时候背后的多个字节来存储数据长度,假设为127则用前面七个字节来囤积数据长度

PayloadLen只有7位,换成无符号整型的话唯有0到127的取值,这么小的数值当然不可以描述较大的多寡,因而确定当数码长度小于或等于125时候它才作为数据长度的描述,若是这几个值为126,则时候背后的五个字节来囤积数据长度,若是为127则用前面七个字节来储存数据长度

PayloadLen唯有7位,换成无符号整型的话唯有0到127的取值,这么小的数值当然不可以描述较大的多寡,由此确定当数码长度小于或等于125时候它才作为数据长度的叙说,若是这一个值为126,则时候背后的三个字节来储存数据长度,借使为127则用前面四个字节来存储数据长度

Masking-key掩码

Masking-key掩码

Masking-key掩码

上面贴出解析数据帧的代码

上面贴出解析数据帧的代码

下边贴出解析数据帧的代码

JavaScript

JavaScript

JavaScript

function decodeDataFrame(e) { var i = 0, j,s, frame = { FIN: e[i]
>> 7, Opcode: e[i++] & 15, Mask: e[i] >> 7,
PayloadLength: e[i++] & 0x7F }; if(frame.PayloadLength === 126) {
frame.PayloadLength = (e[i++] << 8) + e[i++]; }
if(frame.PayloadLength === 127) { i += 4; frame.PayloadLength =
(e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8)

function decodeDataFrame(e) { var i = 0, j,s, frame = { FIN: e[i]
>> 7, Opcode: e[i++] & 15, Mask: e[i] >> 7,
PayloadLength: e[i++] & 0x7F }; if(frame.PayloadLength === 126) {
frame.PayloadLength = (e[i++] << 8) + e[i++]; }
if(frame.PayloadLength === 127) { i += 4; frame.PayloadLength =
(e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8)

function decodeDataFrame(e) { var i = 0, j,s, frame = { FIN: e[i]
>> 7, Opcode: e[i++] & 15, Mask: e[i] >> 7,
PayloadLength: e[i++] & 0x7F }; if(frame.PayloadLength === 126) {
frame.PayloadLength = (e[i++] << 8) + e[i++]; }
if(frame.PayloadLength === 127) { i += 4; frame.PayloadLength =
(e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8)

  • e[i++]; } if(frame.Mask) { frame.MaskingKey = [e[i++], e[i++],
    e[i++], e[i++]]; for(j = 0, s = []; j < frame.PayloadLength;
    j++) { s.push(e[i+j] ^ frame.MaskingKey[j%4]); } } else { s =
    e.slice(i, i+frame.PayloadLength); } s = new Buffer(s); if(frame.Opcode
    === 1) { s = s.toString(); } frame.PayloadData = s; return frame; }
  • e[i++]; } if(frame.Mask) { frame.MaskingKey = [e[i++], e[i++],
    e[i++], e[i++]]; for(j = 0, s = []; j < frame.PayloadLength;
    j++) { s.push(e[i+j] ^ frame.MaskingKey[j%4]); } } else { s =
    e.slice(i, i+frame.PayloadLength); } s = new Buffer(s); if(frame.Opcode
    === 1) { s = s.toString(); } frame.PayloadData = s; return frame; }
  • e[i++]; } if(frame.Mask) { frame.MaskingKey = [e[i++], e[i++],
    e[i++], e[i++]]; for(j = 0, s = []; j < frame.PayloadLength;
    j++) { s.push(e[i+j] ^ frame.MaskingKey[j%4]); } } else { s =
    e.slice(i, i+frame.PayloadLength); } s = new Buffer(s); if(frame.Opcode
    === 1) { s = s.toString(); } frame.PayloadData = s; return frame; }
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
function decodeDataFrame(e) {
var i = 0,
j,s,
frame = {
FIN: e[i] >> 7,
Opcode: e[i++] & 15,
Mask: e[i] >> 7,
PayloadLength: e[i++] & 0x7F
};
 
if(frame.PayloadLength === 126) {
frame.PayloadLength = (e[i++] << 8) + e[i++];
}
 
if(frame.PayloadLength === 127) {
i += 4;
frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
}
 
if(frame.Mask) {
frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];
 
for(j = 0, s = []; j < frame.PayloadLength; j++) {
s.push(e[i+j] ^ frame.MaskingKey[j%4]);
}
} else {
s = e.slice(i, i+frame.PayloadLength);
}
 
s = new Buffer(s);
 
if(frame.Opcode === 1) {
s = s.toString();
}
 
frame.PayloadData = s;
return frame;
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
function decodeDataFrame(e) {
var i = 0,
j,s,
frame = {
FIN: e[i] >> 7,
Opcode: e[i++] & 15,
Mask: e[i] >> 7,
PayloadLength: e[i++] & 0x7F
};
 
if(frame.PayloadLength === 126) {
frame.PayloadLength = (e[i++] << 8) + e[i++];
}
 
if(frame.PayloadLength === 127) {
i += 4;
frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
}
 
if(frame.Mask) {
frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];
 
for(j = 0, s = []; j < frame.PayloadLength; j++) {
s.push(e[i+j] ^ frame.MaskingKey[j%4]);
}
} else {
s = e.slice(i, i+frame.PayloadLength);
}
 
s = new Buffer(s);
 
if(frame.Opcode === 1) {
s = s.toString();
}
 
frame.PayloadData = s;
return frame;
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
function decodeDataFrame(e) {
var i = 0,
j,s,
frame = {
FIN: e[i] >> 7,
Opcode: e[i++] & 15,
Mask: e[i] >> 7,
PayloadLength: e[i++] & 0x7F
};
 
if(frame.PayloadLength === 126) {
frame.PayloadLength = (e[i++] << 8) + e[i++];
}
 
if(frame.PayloadLength === 127) {
i += 4;
frame.PayloadLength = (e[i++] << 24) + (e[i++] << 16) + (e[i++] << 8) + e[i++];
}
 
if(frame.Mask) {
frame.MaskingKey = [e[i++], e[i++], e[i++], e[i++]];
 
for(j = 0, s = []; j < frame.PayloadLength; j++) {
s.push(e[i+j] ^ frame.MaskingKey[j%4]);
}
} else {
s = e.slice(i, i+frame.PayloadLength);
}
 
s = new Buffer(s);
 
if(frame.Opcode === 1) {
s = s.toString();
}
 
frame.PayloadData = s;
return frame;
}

然后是转变数据帧的

下一场是生成数据帧的

然后是转变数据帧的

JavaScript

JavaScript

JavaScript

function encodeDataFrame(e) { var s = [], o = new
Buffer(e.PayloadData), l = o.length; s.push((e.FIN << 7) +
e.Opcode); if(l < 126) { s.push(l); } else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0,
0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00)
>> 8, l&0xFF); } return Buffer.concat([new Buffer(s), o]); }

function encodeDataFrame(e) { var s = [], o = new
Buffer(e.PayloadData), l = o.length; s.push((e.FIN << 7) +
e.Opcode); if(l < 126) { s.push(l); } else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0,
0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00)
>> 8, l&0xFF); } return Buffer.concat([new Buffer(s), o]); }

function encodeDataFrame(e) { var s = [], o = new
Buffer(e.PayloadData), l = o.length; s.push((e.FIN << 7) +
e.Opcode); if(l < 126) { s.push(l); } else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0,
0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00)
>> 8, l&0xFF); } return Buffer.concat([new Buffer(s), o]); }

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
function encodeDataFrame(e) {
var s = [],
o = new Buffer(e.PayloadData),
l = o.length;
 
s.push((e.FIN << 7) + e.Opcode);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), o]);
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
function encodeDataFrame(e) {
var s = [],
o = new Buffer(e.PayloadData),
l = o.length;
 
s.push((e.FIN << 7) + e.Opcode);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), o]);
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
function encodeDataFrame(e) {
var s = [],
o = new Buffer(e.PayloadData),
l = o.length;
 
s.push((e.FIN << 7) + e.Opcode);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), o]);
}

都是根据帧结构示意图上的去处理,在此间不细讲,小说首要在下局地,如若对那块感兴趣的话可以移动web技术研商所~

都是依据帧结构示意图上的去处理,在此处不细讲,小说主要在下部分,如若对那块感兴趣的话可以移动web技术研讨所~

都是根据帧结构示意图上的去处理,在那边不细讲,小说首要在下一些,如若对那块感兴趣的话可以运动web技术商讨所~

 

 

 

三、websocket传输图片和websocket语音聊天室

三、websocket传输图片和websocket语音聊天室

三、websocket传输图片和websocket语音聊天室

正片环节到了,那篇文章最关键的或者显得一下websocket的一些行使情状

正片环节到了,这篇小说最要害的依然显得一下websocket的局地利用情况

正片环节到了,那篇小说最着重的要么显得一下websocket的部分行使处境

1、传输图片

1、传输图片

1、传输图片

咱俩先商讨传输图片的步骤是怎么样,首先服务器收到到客户端请求,然后读取图片文件,将二进制数据转载给客户端,客户端怎么着处理?当然是选择FileReader对象了

咱俩先考虑传输图片的步骤是何许,首先服务器收到到客户端请求,然后读取图片文件,将二进制数据转载给客户端,客户端怎么着处理?当然是运用FileReader对象了

咱俩先思考传输图片的步骤是何许,首先服务器收到到客户端请求,然后读取图片文件,将二进制数据转载给客户端,客户端怎么样处理?当然是行使FileReader对象了

先给客户端代码

先给客户端代码

先给客户端代码

JavaScript

JavaScript

JavaScript

var ws = new WebSocket(“ws://xxx.xxx.xxx.xxx:8888”); ws.onopen =
function(){ console.log(“握手成功”); }; ws.onmessage = function(e) { var
reader = new File里德r(); reader.onload = function(event) { var
contents = event.target.result; var a = new Image(); a.src = contents;
document.body.appendChild(a); } reader.readAsDataURL(e.data); };

var ws = new WebSocket(“ws://xxx.xxx.xxx.xxx:8888”); ws.onopen =
function(){ console.log(“握手成功”); }; ws.onmessage = function(e) { var
reader = new FileReader(); reader.onload = function(event) { var
contents = event.target.result; var a = new Image(); a.src = contents;
document.body.appendChild(a); } reader.readAsDataURL(e.data); };

var ws = new WebSocket(“ws://xxx.xxx.xxx.xxx:8888”); ws.onopen =
function(){ console.log(“握手成功”); }; ws.onmessage = function(e) { var
reader = new FileReader(); reader.onload = function(event) { var
contents = event.target.result; var a = new Image(); a.src = contents;
document.body.appendChild(a); } reader.readAsDataURL(e.data); };

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
var ws = new WebSocket("ws://xxx.xxx.xxx.xxx:8888");
 
ws.onopen = function(){
    console.log("握手成功");
};
 
ws.onmessage = function(e) {
    var reader = new FileReader();
    reader.onload = function(event) {
        var contents = event.target.result;
        var a = new Image();
        a.src = contents;
        document.body.appendChild(a);
    }
    reader.readAsDataURL(e.data);
};

吸纳到新闻,然后readAsDataURL,间接将图纸base64添加到页面中

收下到音信,然后readAsDataURL,直接将图片base64添加到页面中

接收到信息,然后readAsDataURL,直接将图片base64添加到页面中

转到服务器端代码

转到服务器端代码

转到服务器端代码

JavaScript

JavaScript

JavaScript

fs.readdir(“skyland”, function(err, files) { if(err) { throw err; }
for(var i = 0; i < files.length; i++) { fs.readFile(‘skyland/’ +
files[i], function(err, data) { if(err) { throw err; }
o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) {
var s = [], l = buf.length, ret = []; s.push((1 << 7) + 2);
if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126,
(l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0,
(l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00)
>> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

fs.readdir(“skyland”, function(err, files) { if(err) { throw err; }
for(var i = 0; i < files.length; i++) { fs.readFile(‘skyland/’ +
files[i], function(err, data) { if(err) { throw err; }
o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) {
var s = [], l = buf.length, ret = []; s.push((1 << 7) + 2);
if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126,
(l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0,
(l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00)
>> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

fs.readdir(“skyland”, function(err, files) { if(err) { throw err; }
for(var i = 0; i < files.length; i++) { fs.readFile(‘skyland/’ +
files[i], function(err, data) { if(err) { throw err; }
o.write(encodeImgFrame(data)); }); } }); function encodeImgFrame(buf) {
var s = [], l = buf.length, ret = []; s.push((1 << 7) + 2);
if(l < 126) { s.push(l); } else if(l < 0x10000) { s.push(126,
(l&0xFF00) >> 8, l&0xFF); } else { s.push(127, 0, 0, 0, 0,
(l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00)
>> 8, l&0xFF); } return Buffer.concat([new Buffer(s), buf]); }

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i++) {
fs.readFile(‘skyland/’ + files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) + 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i++) {
fs.readFile(‘skyland/’ + files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) + 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
fs.readdir("skyland", function(err, files) {
if(err) {
throw err;
}
for(var i = 0; i < files.length; i++) {
fs.readFile(‘skyland/’ + files[i], function(err, data) {
if(err) {
throw err;
}
 
o.write(encodeImgFrame(data));
});
}
});
 
function encodeImgFrame(buf) {
var s = [],
l = buf.length,
ret = [];
 
s.push((1 << 7) + 2);
 
if(l < 126) {
s.push(l);
} else if(l < 0x10000) {
s.push(126, (l&0xFF00) >> 8, l&0xFF);
} else {
s.push(127, 0, 0, 0, 0, (l&0xFF000000) >> 24, (l&0xFF0000) >> 16, (l&0xFF00) >> 8, l&0xFF);
}
 
return Buffer.concat([new Buffer(s), buf]);
}

注意s.push((1 << 7) +
2)
这一句,那里格外直接把opcode写死了为2,对于Binary
Frame,那样客户端接收到数码是不会尝试举行toString的,否则会报错~

注意s.push((1 << 7) +
2)
这一句,那里非凡直接把opcode写死了为2,对于Binary
Frame,那样客户端接收到数码是不会尝试举行toString的,否则会报错~

注意s.push((1 << 7) +
2)
这一句,那里卓殊直接把opcode写死了为2,对于Binary
Frame,那样客户端接收到多少是不会尝试举办toString的,否则会报错~

代码很不难,在此地向大家大快朵颐一下websocket传输图片的快慢如何

代码很简短,在这边向我们大饱眼福一下websocket传输图片的速度怎么样

代码很粗略,在那里向我们享用一下websocket传输图片的快慢怎么着

测试很多张图纸,总共8.24M

测试很多张图纸,总共8.24M

测试很多张图纸,总共8.24M

普通静态资源服务器需求20s左右(服务器较远)

经常静态资源服务器需求20s左右(服务器较远)

常备静态资源服务器需求20s左右(服务器较远)

cdn需要2.8s左右

cdn需要2.8s左右

cdn需要2.8s左右

那大家的websocket格局啊??!

那大家的websocket格局呢??!

那大家的websocket情势啊??!

答案是一样必要20s左右,是还是不是很失望……速度就是慢在传输上,并不是服务器读取图片,本机上等同的图片资源,1s左右方可做到……那样看来数据流也无能为力冲破距离的范围进步传输速度

答案是同样须要20s左右,是还是不是很失望……速度就是慢在传输上,并不是服务器读取图片,本机上亦然的图形资源,1s左右可以成功……那样看来数据流也无能为力冲破距离的范围进步传输速度

答案是如出一辙需要20s左右,是或不是很失望……速度就是慢在传输上,并不是服务器读取图片,本机上一样的图样资源,1s左右足以形成……那样看来数据流也无能为力冲破距离的限定提升传输速度

上边大家来看看websocket的另一个用法~

上边大家来看望websocket的另一个用法~

下边大家来探望websocket的另一个用法~

 

 

 

用websocket搭建语音聊天室

用websocket搭建语音聊天室

用websocket搭建语音聊天室

先来打点一下语音聊天室的机能

先来整治一下语音聊天室的成效

先来打点一下语音聊天室的职能

用户进入频道随后从Mike风输入音频,然后发送给后台转载给频道里面的其余人,其外人接收到新闻进行播报

用户进入频道随后从迈克风输入音频,然后发送给后台转载给频道里面的其外人,其余人接收到新闻举行播放

用户进入频道随后从迈克风输入音频,然后发送给后台转载给频道里面的其外人,其外人接收到新闻进行播放

看起来困难在三个地点,第三个是节奏的输入,第二是吸纳到多少流进行广播

看起来困难在四个地点,第二个是节奏的输入,第二是收纳到数码流进行播放

看起来困难在多少个地方,第三个是音频的输入,第二是收到到数量流进行广播

先说音频的输入,那里运用了HTML5的getUserMedia方法,不过注意了,那个方法上线是有大坑的,最终说,先贴代码

先说音频的输入,那里运用了HTML5的getUserMedia方法,然而注意了,本条方法上线是有大坑的,最后说,先贴代码

先说音频的输入,这里运用了HTML5的getUserMedia方法,然而注意了,本条方式上线是有大坑的,最后说,先贴代码

JavaScript

JavaScript

JavaScript

if (navigator.getUserMedia) { navigator.getUserMedia( { audio: true },
function (stream) { var rec = new SRecorder(stream); recorder = rec; })
}

if (navigator.getUserMedia) { navigator.getUserMedia( { audio: true },
function (stream) { var rec = new SRecorder(stream); recorder = rec; })
}

if (navigator.getUserMedia) { navigator.getUserMedia( { audio: true },
function (stream) { var rec = new SRecorder(stream); recorder = rec; })
}

1
2
3
4
5
6
7
8
if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}
1
2
3
4
5
6
7
8
if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}
1
2
3
4
5
6
7
8
if (navigator.getUserMedia) {
    navigator.getUserMedia(
        { audio: true },
        function (stream) {
            var rec = new SRecorder(stream);
            recorder = rec;
        })
}

首先个参数是{audio:
true},只启用音频,然后创造了一个SRecorder对象,后续的操作基本上都在那么些目的上开展。此时即使代码运行在当地的话浏览器应该提醒您是不是启用Mike风输入,确定今后就启动了

首先个参数是{audio:
true},只启用音频,然后创制了一个SRecorder对象,后续的操作基本上都在这么些目标上拓展。此时只要代码运行在本地的话浏览器应该擢升您是否启用迈克风输入,确定今后就启动了

首先个参数是{audio:
true},只启用音频,然后创设了一个SRecorder对象,后续的操作基本上都在那几个目的上进展。此时只要代码运行在地面的话浏览器应该升迁您是不是启用迈克风输入,确定之后就开行了

接下去大家看下SRecorder构造函数是甚,给出首要的部分

接下去大家看下SRecorder构造函数是吗,给出主要的局部

接下去大家看下SRecorder构造函数是吗,给出首要的一对

JavaScript

JavaScript

JavaScript

var SRecorder = function(stream) {
    ……
    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);
    ……
}

AudioContext is an audio context object. Anyone who has done audio filtering will know the idea: "before a piece of audio reaches the speakers for playback, we intercept it on the way, and so we obtain the audio data; this interception is done by window.AudioContext, and all of our audio operations are based on this object." Through an AudioContext we can create different AudioNode nodes and then add filters to play modified sound.

Recording works on the same principle: we still go through AudioContext, but with one extra step of receiving the microphone input, rather than requesting an audio file's ArrayBuffer via ajax and decoding it as we usually would when processing audio. Receiving the microphone needs the createMediaStreamSource method; note that its argument is exactly the stream handed to the callback in getUserMedia's second argument.
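For contrast, here is a minimal sketch of that "usual" non-microphone path, requesting an ArrayBuffer and decoding it, reusing the context variable from the snippet above ('some.wav' is just a placeholder URL, not part of the demo):

JavaScript

var xhr = new XMLHttpRequest();
xhr.open('GET', 'some.wav');
xhr.responseType = 'arraybuffer';
xhr.onload = function () {
    // decode the compressed bytes into an AudioBuffer, then play it through the graph
    context.decodeAudioData(xhr.response, function (audioBuffer) {
        var source = context.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(context.destination);
        source.start(0);
    });
};
xhr.send();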

As for the createScriptProcessor method, the official explanation is:

Creates a ScriptProcessorNode, which can be used for direct audio processing via JavaScript.

——————

In short, this method lets us handle the audio capture with JavaScript.
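One caveat worth adding (my note, not the article's): ScriptProcessorNode has since been deprecated in favour of the AudioWorklet API. A rough sketch of the modern equivalent, with an illustrative module path and processor name, looks like this:

JavaScript

context.audioWorklet.addModule('recorder-processor.js').then(function () {
    // 'recorder-processor' must match the name registered inside the module
    var workletNode = new AudioWorkletNode(context, 'recorder-processor');
    audioInput.connect(workletNode);
    workletNode.connect(context.destination);
});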

We have finally reached audio capture! Victory is in sight!

Next, let's connect the microphone input to the audio capture:

JavaScript

audioInput.connect(recorder);
recorder.connect(context.destination);

The official explanation of context.destination is as follows:

The destination property of the AudioContext interface returns an AudioDestinationNode representing the final destination of all audio in the context.

——————

That is, context.destination returns the final destination of all the audio in the context.

Good. At this point we still need an event that listens for the captured audio:

JavaScript

recorder.onaudioprocess = function (e) {
    audioData.input(e.inputBuffer.getChannelData(0));
}

audioData is an object I found online; I only added a clear method to it because it is needed later. Its encodeWAV method is excellent, the author did several rounds of audio compression and optimisation; it will be included with the complete code at the end.

At this point the whole "user joins the channel and audio is captured from the microphone" part is done. Next we have to send the audio stream to the server, and here comes the slightly annoying bit: as mentioned earlier, websocket uses the opcode to distinguish whether the data is text or binary, but what onaudioprocess feeds into input is an array, while playback ultimately needs a Blob of {type: 'audio/wav'}. So before sending we must convert the array into a WAV Blob, which is where the encodeWAV method mentioned above comes in.
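A minimal sketch of the two ends of that hand-off (the variable names follow the demo; it is only meant to make the data types explicit):

JavaScript

// sending side: WebSocket.send accepts a Blob (or an ArrayBuffer) directly
ws.send(audioData.encodeWAV());          // a Blob of type 'audio/wav'

// receiving side: with the default binaryType of 'blob', e.data is already a Blob,
// so it can be handed straight to an <audio> element
ws.onmessage = function (e) {
    audio.src = window.URL.createObjectURL(e.data);
};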

The server seems simple: it only has to forward.

Local testing did indeed work, but then the huge pitfall arrived! When the program runs on a server, calling getUserMedia prompts that it must be in a secure context, i.e. it needs https, which means ws also has to become wss… So the server code no longer uses our own handshake, parsing and encoding; it looks like this:

JavaScript

var https = require('https');
var fs = require('fs');
var ws = require('ws');
var userMap = Object.create(null);   // mock channel: username -> socket
var options = {
    key: fs.readFileSync('./privatekey.pem'),
    cert: fs.readFileSync('./certificate.pem')
};
var server = https.createServer(options, function(req, res) {
    res.writeHead(200, {
        'Content-Type' : 'text/html'
    });

    fs.readFile('./testaudio.html', function(err, data) {
        if(err) {
            return ;
        }

        res.end(data);
    });
});

var wss = new ws.Server({server: server});

wss.on('connection', function(o) {
    o.on('message', function(message) {
        if(message.indexOf('user') === 0) {
            // registration message of the form 'user:<name>'
            var user = message.split(':')[1];
            userMap[user] = o;
        } else {
            // anything else (the audio Blob) is broadcast to everyone in the channel
            for(var u in userMap) {
                userMap[u].send(message);
            }
        }
    });
});

server.listen(8888);

The code is still quite simple: it uses the https module together with the ws module mentioned at the start. userMap is a mock channel that only implements basic forwarding.
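As an aside (my own extension, not part of the original demo), two things the mock channel above leaves out, cleaning up on disconnect and not echoing a message back to its sender, could be sketched like this:

JavaScript

wss.on('connection', function(o) {
    var name = null;                          // username registered on this socket

    o.on('message', function(message) {
        if (typeof message === 'string' && message.indexOf('user') === 0) {
            name = message.split(':')[1];
            userMap[name] = o;
        } else {
            for (var u in userMap) {
                if (userMap[u] !== o) {       // skip the sender
                    userMap[u].send(message);
                }
            }
        }
    });

    o.on('close', function() {                // drop the entry when the socket closes
        if (name) {
            delete userMap[name];
        }
    });
});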

I chose the ws module because pairing it with https to get wss is extremely convenient, with zero conflict with the logic code.

I won't go into setting up https here; essentially you need a private key, a CSR certificate signing request and the certificate file. Interested readers can look it up (although without it you can't use getUserMedia on a live site anyway…).

Below is the complete front-end code:

JavaScript

var a = document.getElementById('a');
var b = document.getElementById('b');
var c = document.getElementById('c');

navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia;

var gRecorder = null;
var audio = document.querySelector('audio');
var door = false;
var ws = null;

b.onclick = function() {
    if(a.value === '') {
        alert('Please enter a username');
        return false;
    }
    if(!navigator.getUserMedia) {
        alert('Sorry, your device does not support voice chat');
        return false;
    }

    SRecorder.get(function (rec) {
        gRecorder = rec;
    });

    ws = new WebSocket("wss://x.x.x.x:8888");

    ws.onopen = function() {
        console.log('handshake succeeded');
        ws.send('user:' + a.value);
    };

    ws.onmessage = function(e) {
        receive(e.data);
    };

    document.onkeydown = function(e) {
        if(e.keyCode === 65) {
            if(!door) {
                gRecorder.start();
                door = true;
            }
        }
    };

    document.onkeyup = function(e) {
        if(e.keyCode === 65) {
            if(door) {
                ws.send(gRecorder.getBlob());
                gRecorder.clear();
                gRecorder.stop();
                door = false;
            }
        }
    }
}

c.onclick = function() {
    if(ws) {
        ws.close();
    }
}

var SRecorder = function(stream) {
    config = {};

    config.sampleBits = config.sampleBits || 8;
    config.sampleRate = config.sampleRate || (44100 / 6);

    var context = new AudioContext();
    var audioInput = context.createMediaStreamSource(stream);
    var recorder = context.createScriptProcessor(4096, 1, 1);

    var audioData = {
        size: 0          // length of the recording
        , buffer: []     // recording cache
        , inputSampleRate: context.sampleRate    // input sample rate
        , inputSampleBits: 16       // input sample bits: 8 or 16
        , outputSampleRate: config.sampleRate    // output sample rate
        , outputSampleBits: config.sampleBits    // output sample bits: 8 or 16
        , clear: function() {
            this.buffer = [];
            this.size = 0;
        }
        , input: function (data) {
            this.buffer.push(new Float32Array(data));
            this.size += data.length;
        }
        , compress: function () { // merge and downsample
            // merge
            var data = new Float32Array(this.size);
            var offset = 0;
            for (var i = 0; i < this.buffer.length; i++) {
                data.set(this.buffer[i], offset);
                offset += this.buffer[i].length;
            }
            // downsample
            var compression = parseInt(this.inputSampleRate / this.outputSampleRate);
            var length = data.length / compression;
            var result = new Float32Array(length);
            var index = 0, j = 0;
            while (index < length) {
                result[index] = data[j];
                j += compression;
                index++;
            }
            return result;
        }
        , encodeWAV: function () {
            var sampleRate = Math.min(this.inputSampleRate, this.outputSampleRate);
            var sampleBits = Math.min(this.inputSampleBits, this.outputSampleBits);
            var bytes = this.compress();
            var dataLength = bytes.length * (sampleBits / 8);
            var buffer = new ArrayBuffer(44 + dataLength);
            var data = new DataView(buffer);

            var channelCount = 1; // mono
            var offset = 0;

            var writeString = function (str) {
                for (var i = 0; i < str.length; i++) {
                    data.setUint8(offset + i, str.charCodeAt(i));
                }
            };

            // RIFF chunk identifier
            writeString('RIFF'); offset += 4;
            // total bytes from the next address to the end of the file, i.e. file size - 8
            data.setUint32(offset, 36 + dataLength, true); offset += 4;
            // WAVE marker
            writeString('WAVE'); offset += 4;
            // format chunk marker
            writeString('fmt '); offset += 4;
            // fmt chunk size, normally 0x10 = 16
            data.setUint32(offset, 16, true); offset += 4;
            // audio format (1 = PCM)
            data.setUint16(offset, 1, true); offset += 2;
            // channel count
            data.setUint16(offset, channelCount, true); offset += 2;
            // sample rate: samples per second, i.e. playback speed of each channel
            data.setUint32(offset, sampleRate, true); offset += 4;
            // byte rate (average bytes per second): channels x sample rate x bits per sample / 8
            data.setUint32(offset, channelCount * sampleRate * (sampleBits / 8), true); offset += 4;
            // block align: bytes per sample frame, channels x bits per sample / 8
            data.setUint16(offset, channelCount * (sampleBits / 8), true); offset += 2;
            // bits per sample
            data.setUint16(offset, sampleBits, true); offset += 2;
            // data chunk marker
            writeString('data'); offset += 4;
            // total size of the sample data, i.e. total size - 44
            data.setUint32(offset, dataLength, true); offset += 4;
            // write the sample data
            if (sampleBits === 8) {
                for (var i = 0; i < bytes.length; i++, offset++) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    var val = s < 0 ? s * 0x8000 : s * 0x7FFF;
                    val = parseInt(255 / (65535 / (val + 32768)));
                    data.setInt8(offset, val);
                }
            } else {
                for (var i = 0; i < bytes.length; i++, offset += 2) {
                    var s = Math.max(-1, Math.min(1, bytes[i]));
                    data.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
                }
            }

            return new Blob([data], { type: 'audio/wav' });
        }
    };

    this.start = function () {
        audioInput.connect(recorder);
        recorder.connect(context.destination);
    }

    this.stop = function () {
        recorder.disconnect();
    }

    this.getBlob = function () {
        return audioData.encodeWAV();
    }

    this.clear = function() {
        audioData.clear();
    }

    recorder.onaudioprocess = function (e) {
        audioData.input(e.inputBuffer.getChannelData(0));
    }
};

SRecorder.get = function (callback) {
    if (callback) {
        if (navigator.getUserMedia) {
            navigator.getUserMedia(
                { audio: true },
                function (stream) {
                    var rec = new SRecorder(stream);
                    callback(rec);
                })
        }
    }
}

function receive(e) {
    audio.src = window.URL.createObjectURL(e);
}

Note: hold down the A key to talk, release it to send.

I did try push-free, real-time talking by sending with setInterval, but the background noise was rather heavy and the result was poor; it would need another layer of processing on top of encodeWAV to remove more ambient noise, so I went with the simpler push-to-talk approach.

 

 

 

This article first surveyed websocket, then, working from the spec, we tried parsing and generating data frames ourselves, which gave us a deeper understanding of websocket.

Finally, the two demos showed websocket's potential. The voice chat room demo touches on quite a lot; anyone who has never worked with the AudioContext object is best off getting familiar with AudioContext first.

That's it for this article~ If you have any thoughts or questions, feel free to raise them so we can discuss and explore together~

 

 

 
