Python torch.le Method Code Examples

This article collects typical usage examples of the torch.le method in Python. If you are wondering how torch.le is used in practice, how to call it, or what real examples look like, the curated code samples below should help. You can also explore further usage examples from the torch module it belongs to.

The following presents 25 code examples of torch.le, sorted by popularity by default. You can upvote the examples you find useful; that feedback helps the system recommend better Python code examples.
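
Before diving into the examples: torch.le(input, other) performs an element-wise input <= other comparison (broadcasting over tensors or scalars) and returns a mask tensor, boolean in current PyTorch and uint8 in older releases. A minimal sketch of the call and the two most common uses of its result:

import torch

x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([2.0, 2.0, 2.0])

# Element-wise x <= y; returns a boolean mask tensor
mask = torch.le(x, y)          # tensor([ True,  True, False])

# The mask can index the original tensor or be cast for arithmetic
print(x[mask])                 # tensor([1., 2.])
print(mask.float().mean())     # fraction of elements where x <= y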

Example 1: preProc2

Upvotes: 6

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def preProc2(x):
    # Access the global variables
    global P, expP, negExpP
    P = P.type_as(x)
    expP = expP.type_as(x)
    negExpP = negExpP.type_as(x)
    # Create a variable filled with -1. Second part of the condition
    z = Variable(torch.zeros(x.size())).type_as(x)
    absX = torch.abs(x)
    cond1 = torch.gt(absX, negExpP)
    cond2 = torch.le(absX, negExpP)
    if (torch.sum(cond1) > 0).data.all():
        x1 = torch.sign(x[cond1])
        z[cond1] = x1
    if (torch.sum(cond2) > 0).data.all():
        x2 = x[cond2] * expP
        z[cond2] = x2
    return z

Source: gitabcworld / FewShotLearning, 21 lines

Example 2: loss_per_level

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def loss_per_level(self, estDisp, gtDisp, label):
    N, C, H, W = estDisp.shape
    scaled_gtDisp = gtDisp
    scale = 1.0
    if gtDisp.shape[-2] != H or gtDisp.shape[-1] != W:
        # compute scale per level and scale gtDisp
        scale = gtDisp.shape[-1] / (W * 1.0)
        scaled_gtDisp = gtDisp / scale
        scaled_gtDisp = self.scale_func(scaled_gtDisp, (H, W))

    # mask for valid disparity
    # (start disparity, max disparity / scale)
    # Attention: the invalid disparity of KITTI is set as 0, be sure to mask it out
    mask = (scaled_gtDisp > self.start_disp) & (scaled_gtDisp < (self.max_disp / scale))
    if mask.sum() < 1.0:
        print('Relative loss: there is no point\'s disparity is in ({},{})!'.format(self.start_disp,
                                                                                    self.max_disp / scale))
        loss = (torch.abs(estDisp - scaled_gtDisp) * mask.float()).mean()
        return loss

    # relative loss
    valid_pixel_number = mask.float().sum()
    diff = scaled_gtDisp[mask] - estDisp[mask]
    label = label[mask]
    # some values are too large for torch.exp() and are not suitable for soft margin loss
    # get absolute values greater than 66
    over_large_mask = torch.gt(torch.abs(diff), 66)
    over_large_diff = diff[over_large_mask]
    # get absolute values smaller than or equal to 66
    proper_mask = torch.le(torch.abs(diff), 66)
    proper_diff = diff[proper_mask]
    # generate label for soft margin loss
    label = label[proper_mask]
    loss = F.soft_margin_loss(proper_diff, label, reduction='sum') + torch.abs(over_large_diff).sum()
    loss = loss / valid_pixel_number
    return loss

Source: DeepMotionAIResearch / DenseMatchingBenchmark, 39 lines

Example 3: pck

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def pck(source_points, warped_points, L_pck, alpha=0.1):
    # compute percentage of correct keypoints
    batch_size = source_points.size(0)
    pck = torch.zeros((batch_size))
    for i in range(batch_size):
        p_src = source_points[i, :]
        p_wrp = warped_points[i, :]
        N_pts = torch.sum(torch.ne(p_src[0, :], -1) * torch.ne(p_src[1, :], -1))
        point_distance = torch.pow(torch.sum(torch.pow(p_src[:, :N_pts] - p_wrp[:, :N_pts], 2), 0), 0.5)
        L_pck_mat = L_pck[i].expand_as(point_distance)
        correct_points = torch.le(point_distance, L_pck_mat * alpha)
        pck[i] = torch.mean(correct_points.float())
    return pck

Source: ignacio-rocco / weakalign, 15 lines
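
For readers unfamiliar with the metric above: PCK (percentage of correct keypoints) counts a warped keypoint as correct when its distance to the source keypoint is at most alpha times a normalization length, and torch.le supplies exactly that threshold test. A simplified single-sample sketch of the same idea (a hypothetical pck_single helper and toy [2, N] keypoint tensors, not the weakalign code):

import torch

def pck_single(p_src, p_wrp, L_pck, alpha=0.1):
    # p_src, p_wrp: [2, N] keypoint coordinates; L_pck: scalar normalization length
    point_distance = torch.sqrt(((p_src - p_wrp) ** 2).sum(dim=0))
    correct = torch.le(point_distance, alpha * L_pck)   # boolean mask of "close enough" keypoints
    return correct.float().mean()

p_src = torch.tensor([[0.0, 10.0], [0.0, 10.0]])
p_wrp = torch.tensor([[1.0, 10.0], [1.0, 30.0]])
print(pck_single(p_src, p_wrp, L_pck=torch.tensor(20.0)))   # 0.5: one of two keypoints within the threshold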

Example 4: distance_bin

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def distance_bin(self, mention_distance):
    bins = torch.zeros(mention_distance.size()).byte().to(self.device)
    rg = [[1, 1], [2, 2], [3, 3], [4, 4], [5, 7], [8, 15], [16, 31], [32, 63], [64, 300]]
    for t, k in enumerate(rg):
        i, j = k[0], k[1]
        b = torch.LongTensor([i]).unsqueeze(-1).expand(mention_distance.size()).to(self.device)
        m1 = torch.ge(mention_distance, b)
        e = torch.LongTensor([j]).unsqueeze(-1).expand(mention_distance.size()).to(self.device)
        m2 = torch.le(mention_distance, e)
        bins = bins + (t + 1) * (m1 & m2)
    return bins.long()

Source: fastnlp / fastNLP, 13 lines

Example 5: _siamese_metrics

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def _siamese_metrics(output, label, margin=1):
    l2_dist_tensor = torch.from_numpy(output.data.cpu().numpy())
    label_tensor = torch.from_numpy(label.data.cpu().numpy())

    # Distance
    is_pos = torch.ByteTensor()
    POS_LABEL = 1
    NEG_LABEL = 0
    torch.eq(label_tensor, POS_LABEL, out=is_pos)  # y == 1
    pos_dist = 0 if len(l2_dist_tensor[is_pos]) == 0 else l2_dist_tensor[is_pos].mean()
    neg_dist = 0 if len(l2_dist_tensor[~is_pos]) == 0 else l2_dist_tensor[~is_pos].mean()
    # print('same dis : diff dis {} : {}'.format(l2_dist_tensor[is_pos == 0].mean(), l2_dist_tensor[is_pos].mean()))

    # accuracy
    pred_pos_flags = torch.ByteTensor()
    torch.le(l2_dist_tensor, margin, out=pred_pos_flags)  # indices where y == 1 is predicted
    cur_score = torch.FloatTensor(label.size(0))
    cur_score.fill_(NEG_LABEL)
    cur_score[pred_pos_flags] = POS_LABEL
    label_tensor_ = label_tensor.type(torch.FloatTensor)
    accuracy = torch.eq(cur_score, label_tensor_).sum() / label_tensor.size(0)

    metrics = {
        'accuracy': accuracy,
        'pos_dist': pos_dist,
        'neg_dist': neg_dist,
    }
    return metrics

Source: Erotemic / ibeis, 33 lines

Example 6: NegativeLogLoss

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def NegativeLogLoss(y_pred, y_true):
    """
    Shape:
        - y_pred: batch x time
        - y_true: batch
    """
    y_true_onehot = to_one_hot(y_true.unsqueeze(-1), y_pred.size(1))
    P = y_true_onehot.squeeze(-1) * y_pred  # batch x time
    P = torch.sum(P, dim=1)  # batch
    gt_zero = torch.gt(P, 0.0).float()  # batch
    epsilon = torch.le(P, 0.0).float() * 1e-8  # batch
    log_P = torch.log(P + epsilon) * gt_zero  # batch
    output = -log_P  # batch
    return output

Source: xingdi-eric-yuan / qait_public, 16 lines

Example 7: forward

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def forward(self, y_pred, y_true):
    _assert_no_grad(y_true)
    P = y_true.float() * y_pred  # batch x time x class
    P = torch.sum(P, dim=1)  # batch x class
    gt_zero = torch.gt(P, 0.0).float()  # batch x class
    epsilon = torch.le(P, 0.0).float() * _eps  # batch x class
    log_P = torch.log(P + epsilon) * gt_zero  # batch x class
    sum_log_P = torch.sum(log_P, dim=1)  # n_b
    return -sum_log_P

Source: xingdi-eric-yuan / MatchLSTM-PyTorch, 11 lines

Example 8: neg_log_obj

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def neg_log_obj(self, words, word_seq_lens, batch_context_emb, chars, char_seq_lens, adj_matrixs, adjs_in, adjs_out, graphs, dep_label_adj, batch_dep_heads, tags, batch_dep_label, trees=None):
    features = self.neural_scoring(words, word_seq_lens, batch_context_emb, chars, char_seq_lens, adj_matrixs, adjs_in, adjs_out, graphs, dep_label_adj, batch_dep_heads, batch_dep_label, trees)
    all_scores = self.calculate_all_scores(features)
    batch_size = words.size(0)
    sent_len = words.size(1)
    maskTemp = torch.arange(1, sent_len + 1, dtype=torch.long).view(1, sent_len).expand(batch_size, sent_len).to(self.device)
    mask = torch.le(maskTemp, word_seq_lens.view(batch_size, 1).expand(batch_size, sent_len)).to(self.device)
    unlabed_score = self.forward_unlabeled(all_scores, word_seq_lens, mask)
    labeled_score = self.forward_labeled(all_scores, word_seq_lens, tags, mask)
    return unlabed_score - labeled_score

Source: allanj / ner_with_dependency, 16 lines
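
The torch.le call in this example implements a common padding-mask idiom: compare a 1..max_len position grid against each sequence's length so that positions beyond the length become False. A standalone sketch of just that pattern (hypothetical lengths, no model code):

import torch

seq_lens = torch.tensor([3, 1, 4])          # lengths of 3 sequences in a padded batch
max_len = int(seq_lens.max())

positions = torch.arange(1, max_len + 1).unsqueeze(0)   # shape [1, max_len], values 1..max_len
mask = torch.le(positions, seq_lens.unsqueeze(1))       # [batch, max_len], True for real tokens

print(mask)
# tensor([[ True,  True,  True, False],
#         [ True, False, False, False],
#         [ True,  True,  True,  True]])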

Example 9: _compute_loss

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def _compute_loss(self, prediction_tensor, target_tensor, weights=None):
    """Compute loss function.

    Args:
      prediction_tensor: A float tensor of shape [batch_size, num_anchors,
        code_size] representing the (encoded) predicted locations of objects.
      target_tensor: A float tensor of shape [batch_size, num_anchors,
        code_size] representing the regression targets
      weights: a float tensor of shape [batch_size, num_anchors]

    Returns:
      loss: a float tensor of shape [batch_size, num_anchors] tensor
        representing the value of the loss function.
    """
    diff = prediction_tensor - target_tensor
    if self._code_weights is not None:
        code_weights = self._code_weights.type_as(prediction_tensor).to(target_tensor.device)
        diff = code_weights.view(1, 1, -1) * diff
    abs_diff = torch.abs(diff)
    abs_diff_lt_1 = torch.le(abs_diff, 1 / (self._sigma**2)).type_as(abs_diff)
    loss = abs_diff_lt_1 * 0.5 * torch.pow(abs_diff * self._sigma, 2) \
        + (abs_diff - 0.5 / (self._sigma**2)) * (1. - abs_diff_lt_1)
    if self._codewise:
        anchorwise_smooth_l1norm = loss
        if weights is not None:
            anchorwise_smooth_l1norm *= weights.unsqueeze(-1)
    else:
        anchorwise_smooth_l1norm = torch.sum(loss, 2)  # * weights
        if weights is not None:
            anchorwise_smooth_l1norm *= weights
    return anchorwise_smooth_l1norm

Source: traveller59 / second.pytorch, 33 lines
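
In this loss, torch.le builds a 0/1 mask that selects, per element, between the quadratic and linear branches of the smooth L1 (Huber-style) loss without any Python-level branching. A minimal sketch of that selection with sigma fixed to 1.0 (illustrative values, not the second.pytorch implementation):

import torch

def smooth_l1(diff, sigma=1.0):
    abs_diff = torch.abs(diff)
    # 1.0 where |diff| <= 1/sigma^2 (quadratic region), 0.0 elsewhere (linear region)
    lt_mask = torch.le(abs_diff, 1.0 / sigma**2).type_as(abs_diff)
    return lt_mask * 0.5 * (abs_diff * sigma) ** 2 + (1.0 - lt_mask) * (abs_diff - 0.5 / sigma**2)

diff = torch.tensor([-3.0, -0.5, 0.0, 0.5, 3.0])
print(smooth_l1(diff))   # tensor([2.5000, 0.1250, 0.0000, 0.1250, 2.5000])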

Example 10: le

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def le(t1, t2):
    """
    Element-wise rich less than or equal comparison between values from operand t1 with respect to values of
    operand t2 (i.e. t1 <= t2), not commutative.
    Takes the first and second operand (scalar or tensor) whose elements are to be compared as argument.

    Parameters
    ----------
    t1: tensor or scalar
        The first operand to be compared less than or equal to second operand
    t2: tensor or scalar
        The second operand to be compared greater than or equal to first operand

    Returns
    -------
    result: ht.DNDarray
        A uint8-tensor holding 1 for all elements in which values of t1 are less than or equal to values of t2,
        0 for all other elements

    Examples
    --------
    >>> import heat as ht
    >>> T1 = ht.float32([[1, 2], [3, 4]])
    >>> ht.le(T1, 3.0)
    tensor([[1, 1],
            [1, 0]], dtype=torch.uint8)
    >>> T2 = ht.float32([[2, 2], [2, 2]])
    >>> ht.le(T1, T2)
    tensor([[1, 1],
            [0, 0]], dtype=torch.uint8)
    """
    return operations.__binary_op(torch.le, t1, t2)

Source: helmholtz-analytics / heat, 35 lines

Example 11: compute

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def compute(self, left, right) -> torch.Tensor:
    return torch.le(left, right)

Source: Heerozh / spectre, 4 lines

Example 12: _bound_logvar_lookup

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def _bound_logvar_lookup(self):
    self.logvar_lookup.weight.data[torch.le(self.logvar_lookup.weight, self.logvar_bound)] = self.logvar_bound

Source: yjlolo / vae-audio, 4 lines

Example 13: test_random_uniform_boundaries

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def test_random_uniform_boundaries(dtype):
    lb = 1.2
    ub = 4.8
    backend = pytorch_backend.PyTorchBackend()
    a = backend.random_uniform((4, 4), seed=10, dtype=dtype)
    b = backend.random_uniform((4, 4), (lb, ub), seed=10, dtype=dtype)
    assert (torch.ge(a, 0).byte().all() and torch.le(a, 1).byte().all() and
            torch.ge(b, lb).byte().all() and torch.le(b, ub).byte().all())

Source: google / TensorNetwork, 10 lines

Example 14: stable_cosine_distance

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def stable_cosine_distance(a, b, squared=True):
    """Computes the pairwise distance matrix with numerical stability."""
    mat = torch.cat([a, b])

    pairwise_distances_squared = torch.add(
        mat.pow(2).sum(dim=1, keepdim=True).expand(mat.size(0), -1),
        torch.t(mat).pow(2).sum(dim=0, keepdim=True).expand(mat.size(0), -1)
    ) - 2 * (torch.mm(mat, torch.t(mat)))

    # Deal with numerical inaccuracies. Set small negatives to zero.
    pairwise_distances_squared = torch.clamp(pairwise_distances_squared, min=0.0)

    # Get the mask where the zero distances are at.
    error_mask = torch.le(pairwise_distances_squared, 0.0)

    # Optionally take the sqrt.
    if squared:
        pairwise_distances = pairwise_distances_squared
    else:
        pairwise_distances = torch.sqrt(pairwise_distances_squared + error_mask.float() * 1e-16)

    # Undo conditionally adding 1e-16.
    pairwise_distances = torch.mul(pairwise_distances, (error_mask == False).float())

    # Explicitly set diagonals to zero.
    mask_offdiagonals = 1 - torch.eye(*pairwise_distances.size(), device=pairwise_distances.device)
    pairwise_distances = torch.mul(pairwise_distances, mask_offdiagonals)

    return pairwise_distances[:a.shape[0], a.shape[0]:]

Source: arthurdouillard / incremental_learning.pytorch, 31 lines

Example 15: _pairwise_distance

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def _pairwise_distance(a, squared=False):
    """Computes the pairwise distance matrix with numerical stability."""
    pairwise_distances_squared = torch.add(
        a.pow(2).sum(dim=1, keepdim=True).expand(a.size(0), -1),
        torch.t(a).pow(2).sum(dim=0, keepdim=True).expand(a.size(0), -1)
    ) - 2 * (torch.mm(a, torch.t(a)))

    # Deal with numerical inaccuracies. Set small negatives to zero.
    pairwise_distances_squared = torch.clamp(pairwise_distances_squared, min=0.0)

    # Get the mask where the zero distances are at.
    error_mask = torch.le(pairwise_distances_squared, 0.0)

    # Optionally take the sqrt.
    if squared:
        pairwise_distances = pairwise_distances_squared
    else:
        pairwise_distances = torch.sqrt(pairwise_distances_squared + error_mask.float() * 1e-16)

    # Undo conditionally adding 1e-16.
    pairwise_distances = torch.mul(pairwise_distances, (error_mask == False).float())

    # Explicitly set diagonals to zero.
    mask_offdiagonals = 1 - torch.eye(*pairwise_distances.size(), device=pairwise_distances.device)
    pairwise_distances = torch.mul(pairwise_distances, mask_offdiagonals)

    return pairwise_distances

Source: arthurdouillard / incremental_learning.pytorch, 29 lines

Example 16: _compute_loss

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def _compute_loss(self, prediction_tensor, target_tensor, weights=None):
    """Compute loss function.

    Args:
      prediction_tensor: A float tensor of shape [batch_size, num_anchors,
        code_size] representing the (encoded) predicted locations of objects.
      target_tensor: A float tensor of shape [batch_size, num_anchors,
        code_size] representing the regression targets
      weights: a float tensor of shape [batch_size, num_anchors]

    Returns:
      loss: a float tensor of shape [batch_size, num_anchors] tensor
        representing the value of the loss function.
    """
    diff = prediction_tensor - target_tensor
    if self._code_weights is not None:
        code_weights = self._code_weights.type_as(prediction_tensor)
        diff = code_weights.view(1, 1, -1) * diff
    abs_diff = torch.abs(diff)
    abs_diff_lt_1 = torch.le(abs_diff, 1 / (self._sigma**2)).type_as(abs_diff)
    loss = abs_diff_lt_1 * 0.5 * torch.pow(abs_diff * self._sigma, 2) \
        + (abs_diff - 0.5 / (self._sigma**2)) * (1. - abs_diff_lt_1)
    if self._codewise:
        anchorwise_smooth_l1norm = loss
        if weights is not None:
            anchorwise_smooth_l1norm *= weights.unsqueeze(-1)
    else:
        anchorwise_smooth_l1norm = torch.sum(loss, 2)  # * weights
        if weights is not None:
            anchorwise_smooth_l1norm *= weights
    return anchorwise_smooth_l1norm

Source: SmallMunich / nutonomy_pointpillars, 33 lines

Example 17: _compute_fake_acc

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def _compute_fake_acc(predictions):
    predictions = torch.le(predictions.data, 0.5)
    if len(predictions.size()) == 3:
        predictions = predictions.view(predictions.size(0) * predictions.size(1) * predictions.size(2))
    acc = (predictions == 1).sum() / (1.0 * predictions.size(0))
    return acc

Source: masabdi / LSPS, 8 lines

Example 18: forward

Upvotes: 5

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def forward(self, prediction_tensor, target_tensor, weights=None):
    """Compute loss function.

    Args:
      prediction_tensor: A float tensor of shape [batch_size, num_anchors,
        code_size] representing the (encoded) predicted locations of objects.
      target_tensor: A float tensor of shape [batch_size, num_anchors,
        code_size] representing the regression targets
      weights: a float tensor of shape [batch_size, num_anchors]

    Returns:
      loss: a float tensor of shape [batch_size, num_anchors] tensor
        representing the value of the loss function.
    """
    diff = prediction_tensor - target_tensor
    if self._code_weights is not None:
        # code_weights = self._code_weights.type_as(prediction_tensor).to(diff.device)
        diff = self._code_weights.view(1, 1, -1).to(diff.device) * diff
    abs_diff = torch.abs(diff)
    abs_diff_lt_1 = torch.le(abs_diff, 1 / (self._sigma ** 2)).type_as(abs_diff)
    loss = abs_diff_lt_1 * 0.5 * torch.pow(abs_diff * self._sigma, 2) + (
        abs_diff - 0.5 / (self._sigma ** 2)
    ) * (1.0 - abs_diff_lt_1)
    if self._codewise:
        anchorwise_smooth_l1norm = loss
        if weights is not None:
            anchorwise_smooth_l1norm *= weights.unsqueeze(-1)
    else:
        anchorwise_smooth_l1norm = torch.sum(loss, 2)  # * weights
        if weights is not None:
            anchorwise_smooth_l1norm *= weights
    return anchorwise_smooth_l1norm

Source: poodarchu / Det3D, 35 lines

Example 19: evaluateError

Upvotes: 4

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def evaluateError(output, target):
    # f = open('./record.txt', 'w')
    errors = {'MSE': 0, 'RMSE': 0, 'ABS_REL': 0, 'LG10': 0,
              'MAE': 0, 'DELTA1': 0, 'DELTA2': 0, 'DELTA3': 0}
    _output, _target, nanMask, nValidElement = setNanToZero(output, target)

    if (nValidElement.data.cpu().numpy() > 0):
        diffMatrix = torch.abs(_output - _target)
        errors['MSE'] = torch.sum(torch.pow(diffMatrix, 2)) / nValidElement
        errors['RMSE'] = torch.sqrt(errors['MSE'])
        errors['MAE'] = torch.sum(diffMatrix) / nValidElement

        realMatrix = torch.div(diffMatrix, _target)
        realMatrix[nanMask] = 0
        errors['ABS_REL'] = torch.sum(realMatrix) / nValidElement
        # del realMatrix
        # del diffMatrix

        LG10Matrix = torch.abs(lg10(_output) - lg10(_target))
        LG10Matrix[nanMask] = 0
        errors['LG10'] = torch.sum(LG10Matrix) / nValidElement
        # del LG10Matrix

        yOverZ = torch.div(_output, _target)
        zOverY = torch.div(_target, _output)
        maxRatio = maxOfTwo(yOverZ, zOverY)
        errors['DELTA1'] = torch.sum(
            torch.le(maxRatio, 1.25).float()) / nValidElement
        errors['DELTA2'] = torch.sum(
            torch.le(maxRatio, math.pow(1.25, 2)).float()) / nValidElement
        errors['DELTA3'] = torch.sum(
            torch.le(maxRatio, math.pow(1.25, 3)).float()) / nValidElement

        errors['MSE'] = float(errors['MSE'].data.cpu().numpy())
        errors['RMSE'] = float(errors['RMSE'].data.cpu().numpy())
        errors['ABS_REL'] = float(errors['ABS_REL'].data.cpu().numpy())
        errors['LG10'] = float(errors['LG10'].data.cpu().numpy())
        errors['MAE'] = float(errors['MAE'].data.cpu().numpy())
        # errors['PERC'] = float(errors['PERC'].data.cpu().numpy())
        errors['DELTA1'] = float(errors['DELTA1'].data.cpu().numpy())
        errors['DELTA2'] = float(errors['DELTA2'].data.cpu().numpy())
        errors['DELTA3'] = float(errors['DELTA3'].data.cpu().numpy())
        # del yOverZ, zOverY, maxRatio
        # f.write(' nValidElement = ' + str(nValidElement) + ' _output ' + str(_output) + ' _target ' + str(_target) + 'maxRatio ' + str(maxRatio) + 'torch.le(maxRatio, 1.25).float()' + str(torch.le(maxRatio, 1.25).float()) + '\n')
        # pdb.set_trace()

    return errors

Source: JunjH / Visualizing-CNNs-for-monocular-depth-estimation, 61 lines
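
The DELTA metrics above are standard depth-estimation accuracies: the fraction of valid pixels whose prediction/ground-truth ratio, taken in whichever direction is larger, stays under a threshold, with torch.le doing the thresholding. A small self-contained sketch of DELTA1 at threshold 1.25 (toy tensors, no NaN masking):

import torch

pred   = torch.tensor([1.0, 2.0, 5.0, 0.4])
target = torch.tensor([1.1, 2.0, 2.0, 1.0])

max_ratio = torch.max(pred / target, target / pred)   # symmetric ratio per element
delta1 = torch.le(max_ratio, 1.25).float().mean()     # fraction of elements within a 1.25x ratio
print(delta1)   # tensor(0.5000): two of four elements pass the threshold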

Example 20: test_fss_class

Upvotes: 4

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def test_fss_class(op):
    class_ = {"eq": DPF, "le": DIF}[op]
    th_op = {"eq": th.eq, "le": th.le}[op]
    gather_op = {"eq": "__add__", "le": "__xor__"}[op]

    # single value
    primitive = class_.keygen(n_values=1)
    alpha, s_00, s_01, *CW = primitive
    mask = th.randint(0, 2 ** n, alpha.shape)
    k0, k1 = [((alpha - mask) % 2 ** n, s_00, *CW), (mask, s_01, *CW)]

    x = th.tensor([0])
    x_masked = x + k0[0] + k1[0]
    y0 = class_.eval(0, x_masked, *k0[1:])
    y1 = class_.eval(1, x_masked, *k1[1:])

    assert (getattr(y0, gather_op)(y1) == th_op(x, 0)).all()

    # 1D tensor
    primitive = class_.keygen(n_values=3)
    alpha, s_00, s_01, *CW = primitive
    mask = th.randint(0, 2 ** n, alpha.shape)
    k0, k1 = [((alpha - mask) % 2 ** n, s_00, *CW), (mask, s_01, *CW)]

    x = th.tensor([0, 2, -2])
    x_masked = x + k0[0] + k1[0]
    y0 = class_.eval(0, x_masked, *k0[1:])
    y1 = class_.eval(1, x_masked, *k1[1:])

    assert (getattr(y0, gather_op)(y1) == th_op(x, 0)).all()

    # 2D tensor
    primitive = class_.keygen(n_values=4)
    alpha, s_00, s_01, *CW = primitive
    mask = th.randint(0, 2 ** n, alpha.shape)
    k0, k1 = [((alpha - mask) % 2 ** n, s_00, *CW), (mask, s_01, *CW)]

    x = th.tensor([[0, 2], [-2, 0]])
    x_masked = x + k0[0].reshape(x.shape) + k1[0].reshape(x.shape)
    y0 = class_.eval(0, x_masked, *k0[1:])
    y1 = class_.eval(1, x_masked, *k1[1:])

    assert (getattr(y0, gather_op)(y1) == th_op(x, 0)).all()

    # 3D tensor
    primitive = class_.keygen(n_values=8)
    alpha, s_00, s_01, *CW = primitive
    mask = th.randint(0, 2 ** n, alpha.shape)
    k0, k1 = [((alpha - mask) % 2 ** n, s_00, *CW), (mask, s_01, *CW)]

    x = th.tensor([[[0, 2], [-2, 0]], [[0, 2], [-2, 0]]])
    x_masked = x + k0[0].reshape(x.shape) + k1[0].reshape(x.shape)
    y0 = class_.eval(0, x_masked, *k0[1:])
    y1 = class_.eval(1, x_masked, *k1[1:])

    assert (getattr(y0, gather_op)(y1) == th_op(x, 0)).all()

Source: OpenMined / PySyft, 58 lines

Example 21: pairwise_distance

Upvotes: 4

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def pairwise_distance(a, squared=False):
    """Computes the pairwise distance matrix with numerical stability.

    output[i, j] = || feature[i, :] - feature[j, :] ||_2

    Args:
      feature: 2-D Tensor of size [number of data, feature dimension].
      squared: Boolean, whether or not to square the pairwise distances.

    Returns:
      pairwise_distances: 2-D Tensor of size [number of data, number of data].
    """
    a = torch.as_tensor(np.atleast_2d(a))

    pairwise_distances_squared = torch.add(
        a.pow(2).sum(dim=1, keepdim=True).expand(a.size(0), -1),
        torch.t(a).pow(2).sum(dim=0, keepdim=True).expand(a.size(0), -1)
    ) - 2 * (
        torch.mm(a, torch.t(a))
    )

    # Deal with numerical inaccuracies. Set small negatives to zero.
    pairwise_distances_squared = torch.clamp(
        pairwise_distances_squared, min=0.0
    )

    # Get the mask where the zero distances are at.
    error_mask = torch.le(pairwise_distances_squared, 0.0)

    # Optionally take the sqrt.
    if squared:
        pairwise_distances = pairwise_distances_squared
    else:
        pairwise_distances = torch.sqrt(
            pairwise_distances_squared + error_mask.float() * 1e-16
        )

    # Undo conditionally adding 1e-16.
    pairwise_distances = torch.mul(
        pairwise_distances,
        (error_mask == False).float()
    )

    # Explicitly set diagonals to zero.
    mask_offdiagonals = 1 - torch.eye(
        *pairwise_distances.size(),
        device=pairwise_distances.device
    )
    pairwise_distances = torch.mul(pairwise_distances, mask_offdiagonals).data.cpu().numpy()

    return pairwise_distances

Source: CompVis / metric-learning-divide-and-conquer, 49 lines

Example 22: class_balanced_cross_entropy_loss

Upvotes: 4

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def class_balanced_cross_entropy_loss(output, label, size_average=True, batch_average=True, void_pixels=None):
    """Define the class balanced cross entropy loss to train the network

    Args:
        output: Output of the network
        label: Ground truth label
        size_average: return per-element (pixel) average loss
        batch_average: return per-batch average loss
        void_pixels: pixels to ignore from the loss

    Returns:
        Tensor that evaluates the loss
    """
    assert(output.size() == label.size())
    labels = torch.ge(label, 0.5).float()

    num_labels_pos = torch.sum(labels)
    num_labels_neg = torch.sum(1.0 - labels)
    num_total = num_labels_pos + num_labels_neg

    output_gt_zero = torch.ge(output, 0).float()
    loss_val = torch.mul(output, (labels - output_gt_zero)) - torch.log(
        1 + torch.exp(output - 2 * torch.mul(output, output_gt_zero)))

    loss_pos_pix = -torch.mul(labels, loss_val)
    loss_neg_pix = -torch.mul(1.0 - labels, loss_val)

    if void_pixels is not None:
        w_void = torch.le(void_pixels, 0.5).float()
        loss_pos_pix = torch.mul(w_void, loss_pos_pix)
        loss_neg_pix = torch.mul(w_void, loss_neg_pix)
        num_total = num_total - torch.ge(void_pixels, 0.5).float().sum()

    loss_pos = torch.sum(loss_pos_pix)
    loss_neg = torch.sum(loss_neg_pix)

    final_loss = num_labels_neg / num_total * loss_pos + num_labels_pos / num_total * loss_neg

    if size_average:
        final_loss /= np.prod(label.size())
    elif batch_average:
        final_loss /= label.size()[0]

    return final_loss

Source: jfzhang95 / DeepGrabCut-PyTorch, 45 lines

Example 23: calcScores

Upvotes: 4

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def calcScores(network, data, thresholds):
    # calculate labels
    ind = 0
    meta = []
    for d in data:
        meta += [ind] * len(d)
        ind += 1
    labels = torch.LongTensor(meta)

    # images have to be center cropped to right size from (288, 144) to (256, 128)
    images = []
    transformation = Compose([CenterCrop((256, 128)), ToTensor(),
                              Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])
    for d in data:
        tens = []
        for im in d:
            im = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
            im = Image.fromarray(im)
            im = transformation(im)
            tens.append(im)
        images.append(torch.stack(tens, 0))

    embeddings = torch.cat([network(Variable(im.cuda(), volatile=True)).data for im in images], 0).cpu()

    pos_mask = _get_anchor_positive_triplet_mask(labels)
    neg_mask = _get_anchor_negative_triplet_mask(labels)

    # compute pairwise square distance matrix
    n = embeddings.size(0)
    m = embeddings.size(0)
    d = embeddings.size(1)

    x = embeddings.unsqueeze(1).expand(n, m, d)
    y = embeddings.unsqueeze(0).expand(n, m, d)
    dist = torch.sqrt(torch.pow(x - y, 2).sum(2))

    pos_distances = dist * pos_mask.float()
    neg_distances = dist * neg_mask.float()
    num_pos = pos_mask.sum()
    num_neg = neg_mask.sum()

    # calculate the right classifications
    for t in thresholds:
        # every 0 entry is also le t, so filter with the mask here
        pos_right = torch.le(pos_distances, t) * pos_mask
        pos_right = pos_right.sum()
        neg_right = torch.gt(neg_distances, t).sum()

        print("[*] Threshold set to: {}".format(t))
        print("Positive right classifications: {:.2f}% {}/{}".format(pos_right/num_pos*100, pos_right, num_pos))
        print("Negative right classifications: {:.2f}% {}/{}".format(neg_right/num_neg*100, neg_right, num_neg))
        print("All right classifications: {:.2f}% {}/{}".format((pos_right+neg_right)/(num_pos+num_neg)*100,
                                                                pos_right+neg_right, num_pos+num_neg))

Source: phil-bergmann / tracking_wo_bnw, 55 lines

Example 24: infer

Upvotes: 4

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def infer(self, memory, memory_lengths):
    """ Decoder inference
    PARAMS
    ------
    memory: Encoder outputs

    RETURNS
    -------
    mel_outputs: mel outputs from the decoder
    gate_outputs: gate outputs from the decoder
    alignments: sequence of attention weights from the decoder
    """
    decoder_input = self.get_go_frame(memory)

    if memory.size(0) > 1:
        mask = ~get_mask_from_lengths(memory_lengths)
    else:
        mask = None

    self.initialize_decoder_states(memory, mask=mask)

    mel_lengths = torch.zeros([memory.size(0)], dtype=torch.int32)
    not_finished = torch.ones([memory.size(0)], dtype=torch.int32)
    if torch.cuda.is_available():
        mel_lengths = mel_lengths.cuda()
        not_finished = not_finished.cuda()

    mel_outputs, gate_outputs, alignments = [], [], []
    while True:
        decoder_input = self.prenet(decoder_input, inference=True)
        mel_output, gate_output, alignment = self.decode(decoder_input)

        dec = torch.le(torch.sigmoid(gate_output.data), self.gate_threshold).to(torch.int32).squeeze(1)
        not_finished = not_finished * dec
        mel_lengths += not_finished

        if self.early_stopping and torch.sum(not_finished) == 0:
            break

        mel_outputs += [mel_output.squeeze(1)]
        gate_outputs += [gate_output]
        alignments += [alignment]

        if len(mel_outputs) == self.max_decoder_steps:
            logging.warning("Reached max decoder steps %d.", self.max_decoder_steps)
            break

        decoder_input = mel_output

    mel_outputs, gate_outputs, alignments = self.parse_decoder_outputs(mel_outputs, gate_outputs, alignments)

    return mel_outputs, gate_outputs, alignments, mel_lengths

Source: NVIDIA / NeMo, 54 lines
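
Here torch.le maintains a per-utterance "still decoding" flag during Tacotron-style inference: a sample stays active while its sigmoid stop-gate output is at or below the threshold, and multiplying the flags keeps finished samples finished. A stripped-down sketch of that bookkeeping (made-up gate values, threshold 0.5):

import torch

gate_threshold = 0.5
not_finished = torch.ones(3, dtype=torch.int32)   # 3 utterances, all still decoding
mel_lengths = torch.zeros(3, dtype=torch.int32)

# Simulated stop-gate activations for two decoder steps
for gate in (torch.tensor([0.1, 0.9, 0.2]), torch.tensor([0.7, 0.9, 0.3])):
    still_going = torch.le(gate, gate_threshold).to(torch.int32)  # 1 while gate <= threshold
    not_finished = not_finished * still_going                     # once 0, stays 0
    mel_lengths += not_finished                                   # count steps only for active samples

print(not_finished)   # tensor([0, 0, 1], dtype=torch.int32)
print(mel_lengths)    # tensor([1, 0, 2], dtype=torch.int32)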

Example 25: forward

Upvotes: 4

# Required module: import torch [as alias]
# Or: from torch import le [as alias]
def forward(self, heads, annotations):
    alpha = 0.25
    gamma = 2.0
    if self.is_3D:
        classifications, regressions, depthregressions = heads
    else:
        classifications, regressions = heads
    # classifications, scalar, mu = classifications_tuple
    batch_size = classifications.shape[0]
    classification_losses = []
    regression_losses = []

    anchor = self.all_anchors  # num_anchors(w*h*A) x 2
    anchor_regression_loss_tuple = []

    for j in range(batch_size):
        classification = classifications[j, :, :]  # N*(w*h*A)*P
        regression = regressions[j, :, :, :]  # N*(w*h*A)*P*2
        if self.is_3D:
            depthregression = depthregressions[j, :, :]  # N*(w*h*A)*P
        bbox_annotation = annotations[j, :, :]  # N*P*3 => P*3

        reg_weight = F.softmax(classification, dim=0)  # (w*h*A)*P
        reg_weight_xy = torch.unsqueeze(reg_weight, 2).expand(reg_weight.shape[0], reg_weight.shape[1], 2)  # (w*h*A)*P*2

        gt_xy = bbox_annotation[:, :2]  # P*2

        anchor_diff = torch.abs(gt_xy - (reg_weight_xy * torch.unsqueeze(anchor, 1)).sum(0))  # P*2
        anchor_loss = torch.where(
            torch.le(anchor_diff, 1),
            0.5 * 1 * torch.pow(anchor_diff, 2),
            anchor_diff - 0.5 / 1
        )
        anchor_regression_loss = anchor_loss.mean()
        anchor_regression_loss_tuple.append(anchor_regression_loss)

        ####################### regression for spatial coordinates ###################
        reg = torch.unsqueeze(anchor, 1) + regression  # (w*h*A)*P*2
        regression_diff = torch.abs(gt_xy - (reg_weight_xy * reg).sum(0))  # P*2
        regression_loss = torch.where(
            torch.le(regression_diff, 1),
            0.5 * 1 * torch.pow(regression_diff, 2),
            regression_diff - 0.5 / 1
        )
        regression_loss = regression_loss.mean() * self.spatialFactor

        ####################### regression for depth ###################
        if self.is_3D:
            gt_depth = bbox_annotation[:, 2]  # P
            regression_diff_depth = torch.abs(gt_depth - (reg_weight * depthregression).sum(0))  # (w*h*A)*P
            regression_loss_depth = torch.where(
                torch.le(regression_diff_depth, 3),
                0.5 * (1/3) * torch.pow(regression_diff_depth, 2),
                regression_diff_depth - 0.5 / (1/3)
            )
            regression_loss += regression_diff_depth.mean()
        ############################################################
        regression_losses.append(regression_loss)

    return torch.stack(anchor_regression_loss_tuple).mean(dim=0, keepdim=True), torch.stack(regression_losses).mean(dim=0, keepdim=True)

Source: zhangboshen / A2J, 58 lines

Note: the torch.le examples in this article were collected from GitHub, MSDocs, and other source-code and documentation platforms. The snippets were selected from open-source projects contributed by various developers; copyright remains with the original authors. For distribution and use, please refer to each project's license. Do not republish without permission.


